repo (string, lengths 8-116) | tasks (string, lengths 8-117) | titles (string, lengths 17-302) | dependencies (string, lengths 5-372k) | readme (string, lengths 5-4.26k) | __index_level_0__ (int64, 0-4.36k) |
---|---|---|---|---|---|
fkunneman/Product_review_summary | ['sentiment analysis', 'aspect based sentiment analysis'] | ['Aspect-based summarization of pros and cons in unstructured product reviews'] | synpat.py evaluation1.py baseline.py summary_overlap return_overlap get_overlap resolve_overlap score_overlaps predict_summary extract_ngrams return_best_overlap score_overlap return_distance match_empty evaluate_alignment align_sentences return_distancetable assess_phrase check_subjectivity extract_pos score_polarity split set append summary_overlap SequenceMatcher find_longest_match len list zip items list search append max score_overlap append return_best_overlap mean len append append sorted return_distancetable mean max len deepcopy remove extend append enumerate list set check_subjectivity sorted | # Product_review_summary This repository harbours scripts that were used for the study described in the Coling 2018 paper 'Aspect-based summarization of pros and cons in unstructured product reviews'. The scripts 'baseline.py' and 'synpat.py' are implementations of two of the systems that are evaluated in the paper. The script 'evaluation1.py' is the implementation of the first evaluation, that matches system output to the pros and cons that are put forward by the authors of the reviews. All scripts can be applied with python 3.x, and take json-formatted review text as basic input. These files can not be shared publicly. Please contact Florian Kunneman ([email protected]) if you are interested in the data or have questions about the code. ### Usage of baseline.py python baseline.py train.json dev.json test.json baseline_predictions.json ### Usage of synpat.py python synpat.py test.json duoman.txt synpat_predictions.txt synpat_predictions.json ### Usage of evaluation1.py python evaluation1.py synpat_predictions.json human_gold_standard.json 70 pattern_pros pattern_cons pattern_results.json pattern_results_summary.csv | 2,100 |
flipkart-incubator/fk-visual-search | ['image retrieval'] | ['Deep Learning based Large Scale Visual Recommendation and Search for E-Commerce'] | scripts/extract_features.py scripts/create_wtbi_crops.py scripts/sampler.py scripts/__init__.py scripts/indexer.py scripts/image_downloader.py scripts/cuda_knn.py scripts/compute_recall.py scripts/feature_extractor.py scripts/create_structured_images.py compute_recall compute_nn construct_fv_map_from_lmdbs FeatureExtractor URLObject ParallelImageDownloader Indexer sample join print set add open print zeros list open Device push values open str list sorted mem_alloc ceil range SourceModule get_function nbytes make_context prepare zip float pop int memcpy_htod time items print argpartition zeros prepared_call memcpy_dtoh len join str add set append randint range len | # fk-visual-search This code allows you to train the Visnet model. Visnet, trained on Flipkart's proprietary internal dataset, powers Visual Recommendations at Flipkart. On the publically available dataset, [Street2Shop](http://tamaraberg.com/street2shop/), Visnet achieves state-of-the-art results. [Here](https://arxiv.org/abs/1703.02344) is the link to the arXiv tech report. In this Repo, we have open-sourced the following: * Training prototxts of Visnet * Triplet sampling code, to generate the training files * A CUDA based fast K-Nearest Neighbor Search library * Other auxillary scripts, such as code to process [Street2Shop](http://tamaraberg.com/street2shop/) dataset, sampling triplets, etc. We soon plan to add other useful scripts, such as: * Our useful modifications over Caffe - the image augmentation layer, and triplet accuracy layer to aid the training of Visnet ## Visnet Architecture | 2,101 |
florex/xai-cnn-lrp | ['text classification'] | ['Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification'] | evaluate_coherence.py sequential_cnn.py process_sent140.py lime.py process_data.py explainer.py save_sentences_to_file.py process_qa.py analyse_merged.py lime_vs_lrpa.py process_imdb.py text_cnn.py produce_similar_sentences.py preprocessor.py process_tripa.py text_cnn_old.py explain_sentence.py explainer_v1.py lime_time.py train_1d_cnn.py evaluate_fidelity.py explain_cnn.py load_json_file load_model to_phrase load_json_file load_model truncate TextCNNExplainer truncate TextCNNExplainer load_model load_model new_predict load_model new_predict load_model load_json_file Preprocessor load_labels_ids jsontocsv load_model load_json_file TextCNN TextCNN read close model_from_json open load close open set sort list split print get_pad_sequence isfile writer glob writerow close sent_tokenize open | # xai-cnn-lrp This repository contains codes to explain One-Dimensional Convolutional Neural Networks (1D-CNN) using Layer-wise Relevance Propagation. The explanation technique consists in computing the relevance of the various n-gram features and determining sufficient and necessary n-grams. Codes developed in this project were designed for experimental purposes and cannot be used in the state to handle all the types of 1D-CNN architecture without any adaptation. The project comes with a multi-channel 1D-CNN model generator which can be used to generate testing models. The method implemented in this repository is detailed in this article : https://arxiv.org/abs/2010.03724 # Dependencies : - Anaconda (python 3.6) - keras (tested on 2.2.4) - tensorflow (1.13.1) - numpy (1.16) - pandas (0.24) The project contains 4 main directories : | 2,102 |
florianmai/word2mat | ['word embeddings'] | ['CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model'] | mutils.py cbow.py word2mat.py wrap_evaluation.py data/extract_umbc.py evaluate_word2mat.py train_cbow.py get_index_batch recursive_file_list get_word_dict get_wordembedding _load_texts CBOWNet CBOWDataset _generate_texts get_wordvec_batch get_index_vocab tokenize sentenize build_vocab _batcher_helper batcher _load_encoder_and_eval prepare get_params_parser get_optimizer _update_options _all_option_combinations run_hyperparameter_optimization dotdict _make_space write_to_csv run_experiment _batcher_helper batcher_cbow prepare get_params_parser Word2MatEncoder get_cbow_cmow_hybrid_encoder get_cbow_encoder get_cmow_encoder HybridEncoder _get_score_for_name _run_experiment_and_save construct_model_name _load_embeddings_from_word2vec _evaluate_downstream_and_probing_tasks _add_common_arguments run_and_evaluate _save_embeddings_to_word2vec array zeros max range len array append zeros max range len tokenize print format vstack len format print Counter most_common sum format get_word_dict print get_index_vocab get_wordembedding len recursive_file_list recursive_file_list get_index_batch Variable t eval numpy cuda add_argument ArgumentParser time hstack included_features add_start_end_token sum load print items list setattr append product enumerate optimization optimized_experiment maximize _all_option_combinations explore fmin _make_space BayesianOptimization items str reader write close output_file open next flush enumerate Adagrad Adadelta Adamax Adam RMSprop SGD SparseAdam Rprop float ASGD split get __delitem__ __setitem__ unigram_dist batch_size precomputed_chunks_dir max_words word_emb_dim dataset_path get_cbow_cmow_hybrid_encoder DataLoader DataParallel temp_path construct_model_name outputmodelname get_optimizer cuda seed open list trainepoch num_docs CBOWNet CBOWDataset optim_fn word_vec context_size precomputed_word_vocab module range outputdir minlr dump format recursive_file_list num_samples_per_item shuffle num_training_samples validation_fraction manual_seed n_epochs float optimizer load int join _word_vec_count_tuple print parameters get_cbow_encoder mode linear_decay get_cmow_encoder len construct_model_name outputmodelname vocabulary _batcher_helper Word2MatEncoder Word2MatEncoder get_cmow_encoder get_cbow_encoder HybridEncoder warn update join run_experiment _get_score_for_name downstream_eval _evaluate_downstream_and_probing_tasks word_embedding_eval output_file _save_embeddings_to_word2vec save write_to_csv construct_model_name outputmodelname outputdir load_embedding join print write close lookup_table numpy range outputdir open SE eval exit downstream_tasks vars add_argument_group add_argument _run_experiment_and_save _add_common_arguments get_params_parser optimization run_hyperparameter_optimization parse_args | # word2mat *Word2Mat* is a framework that learns *sentence embeddings* in a CBOW-word2vec style, but where the words and sentences are represented as matrices. Details of this method and results can be found in our [ICLR paper](https://openreview.net/forum?id=H1MgjoR9tQ). ## Dependencies - Python3 - PyTorch >= 0.4 with CUDA support - NLTK >= 3 ## Setup python3 environment Please install the python3 dependencies in your environment: ``` | 2,103 |
flowersteam/teachDeepRL | ['active learning'] | ['Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots'] | teachDRL/spinup/utils/logx.py teachDRL/toy_env/toy_env.py teachDRL/teachers/utils/dataset.py teachDRL/gym_flowers/__init__.py teachDRL/spinup/algos/ddpg/ddpg.py teachDRL/spinup/algos/vpg/vpg.py teachDRL/spinup/exercises/problem_set_1_solutions/exercise1_2_soln.py teachDRL/spinup/user_config.py teachDRL/spinup/utils/mpi_tf.py teachDRL/spinup/utils/run_entrypoint.py teachDRL/spinup/algos/sac/core.py teachDRL/spinup/exercises/common.py teachDRL/spinup/algos/ddpg/core.py teachDRL/spinup/algos/td3/core.py teachDRL/spinup/algos/td3/td3.py run.py teachDRL/teachers/utils/plot_utils.py teachDRL/spinup/exercises/problem_set_1/exercise1_3.py teachDRL/spinup/utils/normalization_utils.py teachDRL/spinup/exercises/problem_set_1/exercise1_1.py teachDRL/spinup/exercises/problem_set_1/exercise1_2.py teachDRL/spinup/exercises/problem_set_1_solutions/exercise1_1_soln.py teachDRL/spinup/algos/ppo/core.py teachDRL/teachers/algos/oracle_teacher.py teachDRL/spinup/examples/pg_math/2_rtg_pg.py teachDRL/spinup/utils/plot.py teachDRL/spinup/utils/mpi_tools.py teachDRL/teachers/algos/alp_gmm.py teachDRL/spinup/algos/ppo/ppo.py teachDRL/teachers/algos/covar_gmm.py teachDRL/teachers/teacher_controller.py teachDRL/teachers/algos/random_teacher.py teachDRL/gym_flowers/envs/__init__.py teachDRL/spinup/algos/sac/sac.py teachDRL/spinup/utils/serialization_utils.py teachDRL/spinup/exercises/problem_set_2/exercise2_2.py teachDRL/spinup/examples/bench_ppo_cartpole.py teachDRL/teachers/utils/test_utils.py teachDRL/test_bipedal_walker_continuous.py teachDRL/spinup/examples/pg_math/1_simple_pg.py teachDRL/teachers/algos/riac.py teachDRL/spinup/utils/test_policy.py teachDRL/spinup/examples/train_mnist.py teachDRL/spinup/utils/run_utils.py teachDRL/spinup/__init__.py teachDRL/spinup/algos/trpo/trpo.py teachDRL/teachers/uniform_test_sets_generator.py teachDRL/spinup/exercises/problem_set_2/exercise2_3.py setup.py teachDRL/spinup/run.py teachDRL/spinup/algos/trpo/core.py teachDRL/gym_flowers/envs/bipedal_walker_continuous.py teachDRL/spinup/algos/vpg/core.py BipedalWalkerContinuous ContactDetector Rotate2D friendly_err parse_and_execute_grid_search mlp mlp_actor_critic count_vars placeholders placeholder get_vars ddpg ReplayBuffer mlp discount_cumsum gaussian_likelihood mlp_actor_critic count_vars combined_shape mlp_gaussian_policy placeholders_from_spaces placeholders placeholder placeholder_from_space mlp_categorical_policy get_vars ppo PPOBuffer mlp clip_but_pass_gradient apply_squashing_func gaussian_likelihood mlp_actor_critic count_vars mlp_gaussian_policy placeholders placeholder get_vars sac ReplayBuffer mlp mlp_actor_critic count_vars placeholders placeholder get_vars td3 ReplayBuffer values_as_sorted_list flat_grad mlp_actor_critic flat_concat placeholder_from_space mlp hessian_vector_product categorical_kl count_vars combined_shape placeholder mlp_categorical_policy keys_as_sorted_list diagonal_gaussian_kl placeholders assign_params_from_flat discount_cumsum gaussian_likelihood mlp_gaussian_policy placeholders_from_spaces get_vars GAEBuffer trpo mlp discount_cumsum gaussian_likelihood mlp_actor_critic count_vars combined_shape mlp_gaussian_policy placeholders_from_spaces placeholders placeholder placeholder_from_space mlp_categorical_policy get_vars vpg VPGBuffer mlp train_mnist mlp train mlp train reward_to_go print_result gaussian_likelihood td3 ReplayBuffer gaussian_likelihood 
mlp mlp_gaussian_policy gaussian_likelihood td3_with_actor_critic td3 restore_tf_graph colorize EpochLogger Logger MpiAdamOptimizer flat_concat sync_all_params assign_params_from_flat sync_params allreduce mpi_op proc_id msg mpi_avg mpi_fork mpi_statistics_scalar num_procs mpi_sum broadcast MaxMinFilter get_all_datasets get_datasets make_plots main plot_data test_eg setup_logger_kwargs valid_str call_experiment ExperimentGrid all_bools convert_json is_json_serializable param_dict_to_param_vec param_vec_to_param_dict TeacherController ALPGMM proportional_choice EmpiricalALPComputer proportional_choice CovarGMM OracleTeacher RandomTeacher proportional_choice RIAC Region Databag Dataset BufferedDataset plot_gmm draw_competence_grid plt_2_rgb truncate_colormap unscale_vector draw_ellipse gmm_plot_gif plot_regions region_plot_gif scatter_plot scale_vector random_plot_gif get_test_set_name get_empty_env_ranges test_riac test_random test_covar_gmm ToyEnv test_alpgmm array items list print exit add dict lstrip eval any __doc__ dedent run append process ExperimentGrid keys enumerate dense get_vars tuple set_random_seed log_tabular stop_gradient save_state Session run seed locals ReplayBuffer test_agent sample_batch range EpochLogger save_config group placeholders sample time minimize print action_space AdamOptimizer reduce_mean setup_tf_saver global_variables_initializer get_action step store dump_tabular isinstance exp log pi mlp list one_hot log_softmax squeeze reduce_sum multinomial n mlp list exp gaussian_likelihood shape random_normal get_variable actor_critic tuple where set_random_seed log_tabular save_state num_procs log Session run seed locals exp shape cast range update EpochLogger save_config placeholders sync_all_params env_fn int time PPOBuffer minimize print action_space placeholders_from_spaces float32 observation_space finish_path reduce_mean logical_or setup_tf_saver global_variables_initializer step store dump_tabular cast float32 dense tanh tuple set_random_seed my_init log_tabular output_dir stop_gradient save_state set_env_params Session run seed locals ReplayBuffer get_action test_agent sample_batch range dump EpochLogger save_config group astype placeholders sample ConfigProto minimum time record_train_episode minimize print action_space AdamOptimizer reduce_mean reset setup_tf_saver global_variables_initializer get_vars step store dump_tabular tuple set_random_seed log_tabular stop_gradient save_state Session run seed locals ReplayBuffer test_agent sample_batch range EpochLogger save_config group placeholders sample minimum time minimize print action_space AdamOptimizer reduce_mean setup_tf_saver global_variables_initializer get_action step store dump_tabular reduce_sum exp reduce_sum flat_grad float32 placeholder split categorical_kl placeholder diagonal_gaussian_kl placeholders actor_critic values_as_sorted_list flat_grad tuple flat_concat set_random_seed log_tabular save_state num_procs log Session run seed locals exp hessian_vector_product shape range update EpochLogger GAEBuffer save_config placeholders sync_all_params assign_params_from_flat env_fn int time minimize print action_space placeholders_from_spaces observation_space finish_path reduce_mean setup_tf_saver global_variables_initializer get_vars step store dump_tabular actor_critic tuple set_random_seed log_tabular save_state num_procs log Session run seed locals shape range update EpochLogger save_config placeholders VPGBuffer sync_all_params env_fn int time minimize print action_space placeholders_from_spaces 
observation_space finish_path reduce_mean setup_tf_saver global_variables_initializer step store dump_tabular randint log_tabular save_state argmax Session run mlp locals placeholder cast range softmax_cross_entropy dump_tabular EpochLogger one_hot save_config equal time minimize reshape float32 reduce_mean setup_tf_saver load_data int32 global_variables_initializer store len mlp make one_hot minimize print log_softmax squeeze placeholder reduce_sum multinomial train_one_epoch run global_variables_initializer range InteractiveSession n list zeros_like reversed range len print td3 append str load join update dict get_default_graph float32 flat_concat py_func update exit copy check_call print str Bcast allreduce asarray zeros_like mpi_op sqrt sum array mpi_sum asarray convolve isinstance ones concat tight_layout set tsplot ticklabel_format draggable len load join str columns read_table insert len append walk open print listdir dirname zip show get_all_datasets getattr figure plot_data value add_argument make_plots xaxis ArgumentParser legend parse_args logdir count dict join strftime decode join print colorize setup_logger_kwargs dumps check_call convert_json dirname abspath dedent join lower hasattr ExperimentGrid add isinstance is_json_serializable dumps OrderedDict items enumerate append items list sum array squeeze squeeze reshape frombuffer draw tostring_rgb from_list format cmap linspace subplots plot set_xlabel set_xlim axis set_ylabel set_ylim set_aspect set_label subplots high set_xlabel set_xlim add_patch set_ylabel ColorbarBase Rectangle make_axes zip jet tick_params low set_ylim ioff format subplots suptitle print plt_2_rgb close mimsave gca plot_regions mkdir scatter_plot append range len svd arctan2 Ellipse degrees add_patch sqrt range set_label transpose set_label_position ColorbarBase set_ticks_position make_axes pcolor set_aspect set_label truncate_colormap set_yticks set_xlim set_xlabel draw_ellipse autumn_r scatter ColorbarBase set_ticks_position make_axes zip set_label_position tick_params set_ylabel max set_ylim plt_2_rgb plot_gmm draw_lineworld_info savefig gca append format close mkdir zip enumerate ioff int draw_competence_grid suptitle print mimsave figure array len plt_2_rgb tick_params set_aspect scatter savefig append gca format set_xlim close mkdir zip enumerate ioff draw_competence_grid suptitle print mimsave figure array set_ylim print items list get_score cube_competence RIAC format update regions_bounds sampled_tasks print regions_alp copy episode region_plot_gif append range array sample_task get_score cube_competence update format print ALPGMM copy gmm_plot_gif episode append range array sample_task tasks_alps get_score cube_competence update format print CovarGMM tasks_times_rewards copy gmm_plot_gif episode append range array sample_task get_score cube_competence format random_plot_gif print random copy params episode append range | flowersteam/teachDeepRL | 2,104 |
fly519/ELGS | ['graph attention', 'semantic segmentation'] | ['Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations'] | models/sem_seg_model.py tf_ops/3d_interpolation/tf_interpolate_op_test.py models/train.py tf_ops/3d_interpolation/visu_interpolation.py tf_ops/grouping/tf_grouping_op_test.py tf_ops/sampling/tf_sampling.py indoor3d_util.py gen_indoor3d_h5.py collect_indoor3d_data.py tf_ops/grouping/tf_grouping.py utils/show3d_balls.py utils/tf_util.py utils/pointnet_gab_util.py utils/pc_util.py models/test_all_models.py models/test.py utils/provider_p2.py tf_ops/3d_interpolation/tf_interpolate.py models/train_areas.py insert_batch collect_point_label point_label_to_obj room2blocks_plus_normalized room2samples_wrapper_normalized sample_data room2samples_plus_normalized room2blocks_wrapper room2blocks bbox_label_to_obj_room collect_point_bounding_box room2blocks_wrapper_normalized room2samples bbox_label_to_obj room2blocks_plus sample_data_label collect_bounding_box gating_process get_model get_loss placeholder_inputs get_learning_rate log_string eval train get_bn_decay get_learning_rate log_string eval train get_bn_decay get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay three_nn three_interpolate _three_interpolate_grad GroupPointTest fun query_ball_point group_point select_top_k _group_point_grad knn_point GroupPointTest farthest_point_sample gather_point _gather_point_grad prob_sample point_cloud_to_volume_v2 write_ply pyplot_draw_point_cloud write_ply_color draw_point_cloud read_ply point_cloud_to_volume_v2_batch point_cloud_to_image_batch point_cloud_three_views_demo point_cloud_to_volume pyplot_draw_volume point_cloud_to_volume_batch point_cloud_three_views volume_to_point_cloud point_cloud_to_image sample_and_group pointnet_fp_module gab_module pointnet_sa_module_withgab gating_process pointnet_sa_module_msg sample_and_group_all rotate_point_cloud load_h5_data_label_seg loadDataFile getDataFiles load_h5 rotate_point_cloud_by_angle shuffle_data jitter_point_cloud loadDataFile_with_seg onmouse showpoints batch_norm_template batch_norm_for_conv1d conv2d_transpose dropout fully_connected conv3d batch_norm_for_conv2d batch_norm_for_fc batch_norm_for_conv3d avg_pool2d conv2d conv1d avg_pool3d max_pool3d max_pool2d _variable_with_weight_decay batch_norm_template_unused _variable_on_cpu str format print write save_h5 flush join concatenate ones loadtxt glob print exit write close save append range open loadtxt write astype close range open choice sample_data int uniform ceil expand_dims sample_data_label range append len uint8 astype print load exit loadtxt uint8 min astype room2blocks zeros max range print load exit loadtxt int arange min shuffle choice ceil zeros float range uint8 astype room2samples zeros max range print load exit loadtxt join concatenate glob loadtxt write close append amin expand_dims range amax open str basename loadtxt write astype close array range open basename loadtxt astype write close array range amax open join concatenate ones loadtxt glob print exit write close save append amin range amax open int32 float32 placeholder pointnet_fp_module query_ball_point concat reduce_max gating_process transpose squeeze matmul conv2d conv1d _variable_with_weight_decay expand_dims value dropout group_point softmax tile reshape pointnet_sa_module_withgab sparse_softmax_cross_entropy add_to_collection scalar print 
write flush exponential_decay maximum minimum exponential_decay str sum print reshape len log_string now extend unique add_summary append zeros float argmax range enumerate run str sum log_string now shuffle_data jitter_point_cloud add_summary float argmax range run str sum log_string now add_summary float argmax range run value print reshape slice reduce_sum select_top_k tile squeeze point_cloud_to_volume flatten append expand_dims range zeros float astype append vstack array range append point_cloud_to_volume_v2 expand_dims range tuple astype choice pad vstack append zeros float array range append expand_dims range point_cloud_to_image tuple astype choice pad vstack append zeros float array range data read array write array describe int exp abs transpose min mean sqrt argsort round argwhere zeros sum max range euler2mat concatenate draw_point_cloud fromarray uint8 read_ply save point_cloud_three_views set_xlabel add_subplot scatter set_ylabel figure set_zlabel pyplot_draw_point_cloud volume_to_point_cloud write astype close max range open query_ball_point group_point gather_point concat farthest_point_sample knn_point constant value reshape concat tile expand_dims arange shuffle len reshape cos pi dot shape uniform sin zeros array range reshape cos dot shape sin zeros array range shape clip randn File File float imwrite waitKey exit mean imshow render zeros require max multiply add_to_collection xavier_initializer _variable_on_cpu l2_loss truncated_normal_initializer | # Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations Code for the paper: Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations ## Introduction we propose one novel model for point cloud semantic segmentation, which exploit the local and global structures within the point cloud based on the contextual point representations. Specifically, we enrich each point representation by performing one novel gated fusion on the point itself and its contextual points. Afterwards, based on the enriched representation, we propose one novel graph pointnet module (GPM), relying on the graph attention block (GAB) to dynamically compose and update each point representation within the local point cloud structure. Finally, we resort to the spatial-wise and channel-wise attention strategies to exploit the point cloud global structure and thereby yield the semantic label for each point. ## Data download and process We provide the processed files, you can download S3DIS data <a href="https://1drv.ms/u/s!AjxFyWxg5usOajIvRkNnDLOnT3M?e=mmhCMf">here</a> . To prepare your own S3DIS Dataset HDF5 files, refer to <a href="https://github.com/charlesq34/pointnet">PointNet</a>, you need to firstly download <a href="http://buildingparser.stanford.edu/dataset.html">3D indoor parsing dataset version 1.2</a> (S3DIS Dataset) and convert original data to data label files by ```bash python collect_indoor3d_data.py ``` Finally run | 2,105 |
flytxtds/AutoGBT | ['automl'] | ['Automatically Optimized Gradient Boosting Trees for Classifying Large Volume High Cardinality Data Streams Under Concept Drift'] | StreamProcessor.py model.py libscores.py f1_multiclass_score write_scores pac_multilabel r2_score_ pac_metric show_platform f1_multilabel bac_metric bac_binary auc_binary bac_multiclass sanitize_array compute_all_scores show_all_scores f1_metric auc_score_ write_list show_io npac_multiclass_score npac_binary_score log_loss acc_stat read_array f1_binary nbac_multiclass_score get_info pac_binary log_loss_ nbac_binary_score show_version binarize_predictions prior_log_loss mkdir abs_regression normalize_array ls f1_binary_score a_metric auc_multilabel a_score_ r2_regression mvmean r2_metric bac_multilabel tiedrank auc_metric pac_multiclass setupmgr Model simple_time_tracker GenericStreamPreprocessor Utils _log AutoHyperOptimizer StreamSaveRetrainPredictor genfromtxt reshape nanmax nanmin list ravel nanmax nanmin print copy float ravel zeros argmax range shape multiply sum arange argsort unique empty range len array mvmean abs mvmean mvmean binarize_predictions maximum zeros acc_stat sum max format exp mvmean abs print maximum prior_log_loss shape log_loss empty array range mvmean binarize_predictions maximum zeros acc_stat format print empty tiedrank sum range minimum binarize_predictions maximum copy shape sum range sum maximum log float abs mvmean roc_auc_score swrite makedirs load str list items join write_list pwd swrite ls open str sorted list swrite map linux_distribution swrite sorted normalize_array scoring_func sanitize_array keys str list print write encode keys str list print compute_all_scores keys print format | flytxtds/AutoGBT | 2,106 |
fmahoudeau/MiCT-RANet-ASL-FingerSpelling | ['sign language recognition'] | ['Fingerspelling recognition in the wild with iterative visual attention'] | webcam.py mictresnet.py test.py mictranet.py chicago_fs_wild.py utils.py ChicagoFSWild Batchify PriorToMap ToTensor Normalize VisualAttentionCell init_lstm_hidden MiCTRANet MiCTBlock _to_5d_tensor get_mictresnet MiCTResNet _to_4d_tensor BasicBlock get_attention_maps test get_optical_flows get_attention_priors main frobenius_norm iterative_levenshtein get_ctc_vocab compute_acc beam_decode calc_attention_prior PlayerWindow main VideoProcessingPipeline denoise_frame transpose squeeze permute cat split stack transpose split load_url MiCTResNet transfer_weights max calcOpticalFlowFarneback zeros_like min nan_to_num append zeros float cartToPolar range len astype mean stack append range len unsqueeze get_attention_maps pop list asarray format beam_decode print synchronize perf_counter get_optical_flows mean get_attention_priors eval append to sum hidden_size enumerate get_ctc_vocab gpu_id DataLoader ArgumentParser device conf load_state_dict parse_args sum get ChicagoFSWild MiCTRANet ConfigParser Compose test getint load beam_size read print add_argument enumerate min range len iterative_levenshtein enumerate list tolist logaddexp log index append range len get uint8 print put fastNlMeansDenoising sleep clip get max pop calcOpticalFlowFarneback zeros_like print min astype put nan_to_num mean stack sleep append float cartToPolar init_lstm_hidden PlayerWindow save_frame destroyAllWindows waitKey terminate popleft to append replace synchronize draw_canvas perf_counter mean eval predict_proba deque greedy_decode acquire_next_frame join frames_window zeros VideoProcessingPipeline makedirs | # MiCT-RANet for ASL Fingerspelling This repository introduces MiCT-RANet, an efficient Deep Neural Network architecture for real-time recognition of ASL fingerspelled video sequences. It achieves **74.4% letter accuracy** on the **ChicagoFSWild+** test set at **229 FPS**. MiCT-RANet is the SOTA at the time of publishing this repository (Oct-2020), and improves the previous best performing model by a whopping 19.5%. A **fingerspelling practice application** using this model is also included in this repository. To our knowldege it is the first fully functional fingerspelling application based only on RGB frames. It resolves a number of limitations found in other applications which are described in the enclosed video.  MiCT-RANet mainly combines research from two recent papers and adds an improved training procedure: * [Mixed 3D/2D Convolutional Tube (MiCT) for Human Action Recognition](https://www.microsoft.com/en-us/research/uploads/prod/2018/05/Zhou_MiCT_Mixed_3D2D_CVPR_2018_paper.pdf): this CVPR'18 paper proposes to augment a 2D ConvNet backbone with a small number of parallel 3D convolutions introduced at key locations. This architecture allows the 3D convolution branches to only learn residual temporal features, which are the motion of objects and persons in videos, to complement the spatial features learned by 2D convolutions. My implementation of MiCT uses a ResNet backbone and is described in detail in this [Medium story](https://towardsdatascience.com/mict-net-for-human-action-recognition-in-videos-3a18e4f97342). * [Fingerspelling Recognition in the Wild with Iterative Visual Attention](https://arxiv.org/pdf/1908.10546.pdf): this ICCV'19 paper introduces the ChicagoFSWild+ dataset, the largest collection to date of ASL fingerspelling videos. 
The team at the University of Chicago achieved 62.3% letter accuracy on this recognition task using recurrent visual attention applied to the feature maps of an AlexNet backbone. They developed an iterative training approach which increasingly zooms in on the signing hand, thereby eliminating the need for a hand detector. ### Repository Content This repository includes: * Source code of the Mixed 3D/2D Convolutional Tube (MiCT-Net) built on the ResNet backbone. | 2,107 |
fmfi-compbio/coral-basecaller | ['speech recognition'] | ['Nanopore Base Calling on the Edge'] | basecall.py backend.py signal_to_chunks _slice Basecaller batch_process med_mad finalizer_process Coral rescale_signal caller_process Watcher listdir read_files write_output drain_result_queue min max len median absolute clip astype float32 med_mad slice rescale_signal max _slice get call_raw int8 Coral put zeros get zip reshape astype float32 put beam_search append split get signal_to_chunks slice preprocess_signal put append print endswith join isdir get write_output print len | # ONT basecaller running on Coral edge TPU ## Setup and installation First you need [Coral TPU accelerator](https://coral.ai/products/accelerator/) plugged into your USB port. Before proceeding further, ensure that you have python3 (~3.7) installed and that `/bin/env python` refers to it. If you are using `conda`, you can just create environment as `conda create --name myenv python=3.7` and then activate myenv. Alternatively, you can try adjusting `python` to `python3` in `basecall.py` and use `pip3` instead of `pip`. Then install [edge TPU runtime](https://coral.ai/docs/accelerator/get-started#1-install-the-edge-tpu-runtime). We strongly recommend using maximum frequency version and plugging the device into USB3 port. Install requirements via `pip install -r requirements.txt`. Note that we have fixed version for Linux/Ubuntu so if you are on Mac you should edit requirements to correct platform. | 2,108 |
fmohaghegh/OceanWavePrediction | ['audio classification'] | ['Convolutional RNN: an Enhanced Model for Extracting Features from Sequential Data'] | Training.py Testing.py next_batch int reshape append randint range len | # Ocean Wave Prediction This repository uses recurrent neural nets to predict the ocean waves from the previous data. It takes the spatiotemporal data from the simulation, and applies Recurrent Neural Net in the time series data. Each node in the time seris data is the spatial domain. This code is the revised version of CRNN in the paper: Convolutional RNN: an Enhanced Model for Extracting Features from Sequential Data (https://arxiv.org/abs/1602.05875) Calling the below function is equivalnet to applying one CRNN layer. For a deep model with a few CRNN layers, the function should be invoked multiple times. Given a tensor, the function extracts patches of `kernel_size` time-steps, and processed each with one or more recurrent layers. The hidden state of the recurrent neural network is then returned as the feature vector representing the path. Args: | 2,109 |
fmohr/AILIbs | ['automl'] | ['ML-Plan: Automated machine learning via hierarchical planning'] | softwareconfiguration/LearningCurve/lc/LearningCurve.py JAICore/jaicore-processes/testrsc/simple_python_script.py softwareconfiguration/LearningCurve/lc/Plot.py softwareconfiguration/LearningCurve/ipl/app.py JAICore/jaicore-ml/testrsc/ml/scikitwrapper/importfolder_test/test_module_1.py JAICore/jaicore-ml-weka/testrsc/ml/scikitwrapper/importfolder_test/test_module_1.py JAICore/jaicore-processes/testrsc/simple_parameterized_python_script.py JAICore/jaicore-ml/resources/sklearn/sklearn_template.twig.py JAICore/jaicore-ml/resources/sklearn/sklearn_template_windows.twig.py limit_memory ArgsHandler TimeSeriesBasedModel ArffData warn serialize_prediction ModeType serialize_model deserialize_model ProblemType SingleTargetLearningModel My_MLPRegressor My_MLPRegressor extract_anchorpoints extrapolate f_comb pow_4 log_log_linear f_combNew log_power pow_3 exp_4 mmf decomment queryTestData print dump print load print tolist dumps isinstance setrlimit getrlimit RLIMIT_AS extract_anchorpoints predict zip strip | [](https://travis-ci.com/starlibs/AILibs)
[](https://sonarcloud.io/dashboard/index/starlibs.ailibs)
[](https://sonarcloud.io/component_measures?id=starlibs.ailibs&metric=coverage&view=list)
[](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A"ai.libs)
[](https://javadoc.io/doc/ai.libs/jaicore-basic)
# AILibs
AILibs is a modular collection of Java libraries related to automated decision making. It's highlight functionalities are:
* Graph Search ([jaicore-search](https://starlibs.github.io/AILibs/projects/jaicore-search/)): (AStar, BestFirst, Branch & Bound, DFS, MCTS, and more)
| 2,110 |
foamliu/CRNN | ['optical character recognition', 'scene text recognition'] | ['An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition'] | image_aug.py train.py demo.py optimizer.py eval.py test.py models.py extract.py replace.py config.py utils.py test/test_lr.py data_gen.py strLabelConverter image_resize MJSynthDataset extract extract image_aug BidirectionalLSTM CRNN CRNNOptimizer image_resize main train_net train valid get_learning_rate clip_gradient ensure_folder encode_target AverageMeter get_images_for_test accuracy save_checkpoint adjust_learning_rate parse_args draw_str get_logger int BORDER_REPLICATE copyMakeBorder resize round format print extractall close ZipFile augment_images expand_dims str print valid DataLoader save_checkpoint seed end_epoch Adam apply to MJSynthDataset get_logger range SummaryWriter manual_seed float CRNN checkpoint load print min parameters train add_scalar model zero_grad IntTensor to update format LongTensor size item info enumerate criterion backward Variable AverageMeter accuracy step len update format LongTensor criterion info size AverageMeter accuracy tqdm eval item to parse_args train_net param_groups clamp_ save print param_groups data decode view zip float max add_argument ArgumentParser setFormatter getLogger addHandler StreamHandler Formatter DEBUG setLevel putText FONT_HERSHEY_PLAIN makedirs print format | # Convolutional Recurrent Neural Network ## Introduction This is a PyTorch re-implementation of CRNN: Convolutional Recurrent Neural Network ([paper](https://arxiv.org/pdf/1507.05717.pdf)). The features are summarized blow: ## DataSet We use the synthetic dataset ([mjsynth](http://www.robots.ox.ac.uk/~vgg/data/text/)) released by Jaderberg et al. as the training data. The dataset contains 8 millions training images and their corresponding ground truth words. Such images are generated by a synthetic text engine and are highly realistic. <pre> @article{Jaderberg14c, title={Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition}, author={Jaderberg, M. and Simonyan, K. and Vedaldi, A. and Zisserman, A.}, | 2,111 |
foamliu/MobileFaceNet | ['face verification'] | ['MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices'] | optimizer.py focal_loss.py pre_process.py megaface.py retinaface/detector.py retinaface/data/config.py retinaface/utils/box_utils.py data_gen.py train.py retinaface/models/net.py retinaface/layers/modules/__init__.py extract.py align_faces.py retinaface/models/retinaface.py retinaface/layers/modules/multibox_loss.py utils.py export.py mobilefacenet.py retinaface/loader.py retinaface/utils/nms/py_cpu_nms.py retinaface/data/wider_face.py retinaface/utils/timer.py retinaface/data/data_augment.py retinaface/layers/functions/prior_box.py image_aug.py demo.py retinaface/data/__init__.py lfw_eval.py config.py retinaface/layers/__init__.py get_reference_facial_points get_affine_transform_matrix FaceWarpException warp_and_crop_face ArcFaceDataset extract FocalLoss extract get_image evaluate show_bboxes get_threshold get_feature accuracy save_aligned lfw_test copy_file transform process error_analysis visualize get_image gen_feature pngtojpg walkdir test read_feature remove_noise write_feature parse_args crop crop_one_image match_result DepthwiseSeparableConv InvertedResidual MobileFaceNet GDConv _make_divisible ConvBNReLU ArcMarginModel MFNptimizer main train_net train align_face clip_gradient draw_bboxes ensure_folder AverageMeter accuracy save_checkpoint adjust_learning_rate get_face_attributes get_central_face_attributes parse_args get_all_face_attributes get_logger select_significant_face RetinafaceDetector remove_prefix load_model check_keys _mirror _distort _crop _expand _resize_subtract_mean _pad_to_square preproc WiderFaceDetection detection_collate PriorBox MultiBoxLoss MobileNetV1 FPN conv_bn1X1 conv_bn conv_bn_no_relu conv_dw SSH LandmarkHead BboxHead ClassHead RetinaFace decode_landm decode nms intersect log_sum_exp matrix_iou jaccard encode_landm center_size matrix_iof match point_form encode Timer py_cpu_nms print astype float32 array int32 max dtype ones hstack float32 lstsq warpAffine T get_reference_facial_points getAffineTransform SimilarityTransform float32 shape estimate get_affine_transform_matrix format print extractall close ZipFile join list print tqdm get_central_face_attributes append range len align_face fromarray transformer to flip get_image copy transform zeros numpy time format print get_feature pi tqdm dot eval append acos clip split ensure_folder pdf linspace str ylabel title savefig legend append format plot mean float int xlabel print hist std split int float split join basename imwrite draw_bboxes tqdm imread int str format print len save_aligned copy_file split append float range enumerate join get_central_face_attributes imwrite align_face join imwrite draw_bboxes resize imread get_all_face_attributes int split append float len extract format evaluate get_threshold print accuracy process walk imwrite replace align_face get_central_face_attributes dirname makedirs format print walkdir tqdm crop_one_image format print walkdir append len fromarray transformer read fromstring unpack open data pack write open join remove replace print strip set add walk exists open join sum print read_feature listdir join imwrite replace imread walk add_argument ArgumentParser int max MobileFaceNet SGD MultiStepLR DataParallel DataLoader save_checkpoint focal_loss max seed end_epoch to get_logger range ArcMarginModel SummaryWriter format manual_seed float checkpoint load ArcFaceDataset print lfw_test train step add_scalar update format 
metric_fc criterion model backward clip_gradient AverageMeter zero_grad accuracy info item to step enumerate len parse_args train_net print save param_groups clamp_ print param_groups topk size eq expand_as sum warp_and_crop_face reshape imread get_reference_facial_points detect_faces convert float enumerate imread detect_faces select_significant_face detect_faces convert rectangle range circle setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO mkdir keys set load RetinaFace remove_prefix check_keys load_state_dict current_device minimum int all reshape min maximum copy choice shape matrix_iof randrange array range _convert COLOR_HSV2BGR astype copy COLOR_BGR2HSV randrange randint cvtColor int copy shape uniform randrange randint empty shape randrange reshape copy shape empty max astype float32 resize is_tensor isinstance empty append float type enumerate clamp size min expand max intersect expand_as minimum prod maximum all minimum prod maximum all squeeze_ size jaccard encode_landm index_fill_ point_form encode max range log reshape size unsqueeze cat cat cat max mul sort new clamp index_select resize_as_ long append maximum minimum | # MobileFaceNets  PyTorch implementation of MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices. [paper](https://arxiv.org/abs/1804.07573). ## Performance |Accuracy|LFW|MegaFace|Download| |---|---|---|---| |paper|99.55%|92.59%|| |ours|99.48%|82.55%|[Link](https://github.com/foamliu/MobileFaceNet/releases/download/v1.0/mobilefacenet_scripted.pt)| ## Dataset | 2,112 |
fourmi1995/IronSegExperiment-PSPNet | ['scene parsing', 'semantic segmentation'] | ['Semantic Understanding of Scenes through the ADE20K Dataset'] | train.py network.py tools.py inference.py evaluate.py image_reader.py model.py load main get_arguments read_images_from_disk ImageReader image_scaling read_labeled_image_list random_crop_and_pad_image_and_labels image_mirroring load main get_arguments save PSPNet50 PSPNet101 layer Network load_img decode_labels preprocess prepare_label read_labelcolours load main get_arguments save add_argument ArgumentParser print restore format crop_to_bounding_box where set_shape add_n Saver flipped_eval gather argmax decode_image Session run open checkpoints global_variables PSPNet squeeze resize_bilinear placeholder add shape streaming_mean_iou cast expand_dims get_arguments format get_checkpoint_state preprocess flip_left_right ConfigProto trange local_variables_initializer load int join constant not_equal print reshape model_checkpoint_path read_file int32 global_variables_initializer split less stack boolean_mask reverse to_float resize_images to_int32 squeeze multiply stack random_uniform resize_nearest_neighbor expand_dims pad_to_bounding_box random_crop concat maximum shape cast set_shape append join split open image_scaling concat image_mirroring read_file cast random_crop_and_pad_image_and_labels decode_png decode_jpeg split print join makedirs decode_labels save_dir imsave img_path load_img makedirs shape loadmat constant one_hot reshape matmul read_labelcolours format print exit read_file isfile decode_png decode_jpeg pad_to_bounding_box concat cast expand_dims split set_random_seed save num_classes less_equal get_collection map ylabel title scalar_mul savefig legend append range num_steps start_queue_runners plot sparse_softmax_cross_entropy_with_logits stack random_seed power time learning_rate PSPNet101 snapshot_dir xlabel request_stop Coordinator reduce_mean pow UPDATE_OPS prepare_label bool | # PSPNet_tensorflow ## Introduction This is an implementation of PSPNet in TensorFlow for semantic segmentation on the [cityscapes](https://www.cityscapes-dataset.com/) dataset. We first convert weight from [Original Code](https://github.com/hszhao/PSPNet) by using [caffe-tensorflow](https://github.com/ethereon/caffe-tensorflow) framework. ## Update: ## News (2018.11.08 updated): Now you can try PSPNet on your own image online using [ModelDepot live demo](https://modeldepot.io/hellochick/pspnet)! #### 2018/01/24: 1. `Support evaluation code for ade20k dataset` #### 2018/01/19: | 2,113 |
foxlf823/Multi-Filter-Residual-Convolutional-Neural-Network | ['medical code prediction'] | ['ICD Coding from Clinical Text Using Multi-Filter Residual Convolutional Neural Network'] | elmo/elmo.py models.py elmo/encoder_base.py elmo/lstm_cell_with_projection.py preprocess_mimic2.py options.py train_test.py elmo/elmo_lstm.py elmo/scalar_mix.py utils.py main.py preprocess_mimic3.py elmo/__init__.py CNN OutputLayer ResidualBlock MultiCNN WordRep pick_model MultiResCNN Bert_seq_cls ResCNN train test build_matrix prepare_instance micro_precision _readFloat pad_sequence print_metrics load_embeddings word_embeddings micro_recall next_labels micro_accuracy next_notes macro_f1 write_discharge_summaries all_metrics load_pretrain_emb my_collate_bert save_everything recall_at_k gensim_to_fasttext_embeddings _readString fasttext_embeddings macro_accuracy load_vocab_dict macro_recall my_collate precision_at_k micro_f1 all_macro build_vocab ProcessedIter prepare_instance_bert build_pretrain_embedding all_micro concat_data intersect_size save_metrics load_lookups auc_metrics gensim_to_embeddings reformat MyDataset union_size norm2one split_data macro_precision load_full_codes save_embeddings early_stop _ElmoBiLm _ElmoCharacterEncoder batch_to_ids Elmo ElmoLstm _EncoderBase LstmCellWithProjection ScalarMix CNN load MultiCNN test_model len load_state_dict MultiResCNN Bert_seq_cls cuda gpu ResCNN model print backward step zero_grad iter item append next range len replace concatenate print print_metrics mean eval iter all_metrics range len load build_matrix replace set wv save_embeddings load build_matrix replace set wv save_embeddings items list tqdm word_vec append zeros len array join ProcessedIter print Word2Vec save train build_vocab join ProcessedIter print FastText save train build_vocab join startswith split print join print write set open append int next reader int next reader set defaultdict set vocab MIMIC_2_DIR set data_path load_vocab_dict load_full_codes len from_pretrained bert_dir len zeros enumerate pad_sequence max batch_to_ids pad_sequence max nanargmax print nanargmin save save_metrics cuda gpu state_dict print items list union_size intersect_size sum intersect_size sum intersect_size macro_precision macro_recall micro_recall micro_precision roc_curve mean append ravel range auc append float sum array enumerate len append float sum enumerate update recall_at_k all_micro precision_at_k auc_metrics ravel all_macro str read decode bytes ord read dict sqrt sum square items list format print len sqrt uniform norm2one zeros load_pretrain_emb Batch Vocabulary Instance index_instances TextField ELMoTokenCharactersIndexer append | # Multi Filter Residual Convolutional Neural Network for Text Classification The Multi Filter Residual Convolutional Neural Network (MultiResCNN) is built based on [TextCNN](https://github.com/yoonkim/CNN_sentence), [Residual Network](https://github.com/KaimingHe/deep-residual-networks) and [CAML](https://github.com/jamesmullenbach/caml-mimic). It could be used as a strong baseline model for text classification. The repo can be used to reproduce the results in the [paper](https://arxiv.org/abs/1912.00862): @inproceedings{li2020multirescnn, title={ICD Coding from Clinical Text Using Multi-Filter Residual Convolutional Neural Network}, author={Li, Fei and Yu, Hong}, booktitle={Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence}, year={2020} } | 2,114 |
fpcasale/GPPVAE | ['time series'] | ['Gaussian Process Prior Variational Autoencoders'] | pysrc/faceplace/utils.py pysrc/faceplace/train_vae.py pysrc/faceplace/callbacks.py pysrc/faceplace/train_gppvae.py pysrc/faceplace/vmod.py pysrc/faceplace/process_data.py pysrc/faceplace/vae.py pysrc/faceplace/data_parser.py pysrc/faceplace/gp.py callback callback_gppvae callback_gppvae0 _compose_multi _compose FaceDataset read_face_data generate_data GP main unzip_data import_data split_data backprop_and_update main eval_step encode_Y main train_ep eval_ep smartDumpDictHdf5 unzip smartSum smartAppend smartAppendDict export_scripts download _filename Conv2dCellUp FaceVAE Conv2dCellDown f_act normalize_rows Vmodel append clip range concatenate concatenate append range clip len subplot plot close colorbar imshow title savefig figure _compose subplot plot close colorbar imshow ylim title _compose_multi figure savefig concatenate File close astype float32 in1d unique tensor enumerate dot linspace randn join list File import_data close download_data split_data create_dataset keys print unzip join makedirs join imresize glob sort append zeros imread array enumerate split seed int list permutation arange in1d keys data callback_gppvae outdir DataLoader taylor_coeff K cuda ParameterList seed read_face_data set_trace Adam eval_step epochs smartAppend load_state_dict to range detach encode_Y debug manual_seed backprop_and_update info FaceDataset float load vae_weights extend smartAppendDict parameters cpu numpy eval decode backward cpu float step zero_grad smartSum nll K encode taylor_expansion train sum cuda enumerate callback eval_ep save state_dict train_ep randn Variable forward backward step zero_grad smartSum float cpu train sum enumerate eval append smartAppend list keys list create_dataset keys create_group getcwd join urlretrieve _filename urlparse glob join mkdir copyfile | # GPPVAE Code accompanying the paper [Gaussian Process Prior Variational Autoencoder (GPPVAE)](https://arxiv.org/abs/1810.11738) [1]. The implementation in this repository is in [pytorch](https://pytorch.org/). <p align="center"> <img src="img/gppvae.png" alt="gppvae" width="80%"> </p> [1] Casale FP, Dalca AV, Saglietti L, Listgarten J, Fusi N. Gaussian Process Prior Variational Autoencoders, 32nd Conference on Neural Information Processing Systems, 2018, Montreal, Canada. ## Install dependencies The dependencies can be installed using [anaconda](https://www.anaconda.com/download/): ```bash | 2,115 |
fpthink/PDGN | ['point cloud generation', 'data augmentation'] | ['Progressive Point Cloud Deconvolution Generation Network', 'Diffusion Probabilistic Models for 3D Point Cloud Generation'] | lib/sync_bn/__init__.py lib/sync_bn/unittest.py lib/pointops/pointops/setup.py utils/nn_utils.py datasets_4point.py lib/pointops/setup.py utils/chamfer_2loss.py util/dataset.py lib/sync_bn/comm.py util/scannet.py util/config.py lib/sync_bn/batchnorm.py lib/sync_bn/replicate.py util/__init__.py utils/chamfer_loss.py models/PDGN_v1.py main.py util/util.py util/pt_util.py utils/graph_utils.py lib/pointops/functions/pointops.py utils/provider.py util/transform.py ModelNetDataset pc_normalize PartDataset main parse_args check_folder check_args QueryAndGroup_Dilate GroupAll NearestNeighbor KNNQuery Interpolation Le_QueryAndGroup LabelStatIdx GroupingInt BallQuery FeatureGather QueryAndGroup LabelStatBallRange Grouping Le_QueryAndGroup_SameSize KNNQueryExclude FeatureDistribute KNNQueryNaive LabelStatAndBallQuery FurthestSampling Gathering pairwise_distances Gen_QueryAndGroupXYZ Le_QueryAndGroup_OnlyFeature _sum_ft SynchronizedBatchNorm2d _unsqueeze_ft _SynchronizedBatchNorm SynchronizedBatchNorm1d SynchronizedBatchNorm3d SyncMaster FutureResult SlavePipe execute_replication_callbacks CallbackContext DataParallelWithCallback patch_replication_callback TorchTestCase as_numpy load_cfg_from_cfg_file merge_cfg_from_list CfgNode _assert_with_logging _decode_cfg_value _check_and_coerce_cfg_value_type make_dataset PointData _ConvBase variable_size_collate BNMomentumScheduler Trainer save_checkpoint TrainValSplitter group_model_params _FeatureDropoutNoScaling BatchNorm3d Conv3d _BNBase BatchNorm1d FC checkpoint_state SharedMLP_1d Conv1d set_bn_momentum_default SharedMLP load_checkpoint _DropoutNoScaling Conv2d BatchNorm2d worker_init_fn ScanNet RandomShift RandomRotate ToTensor Compose RandomRotatePerturbation RandomJitter RandomScale pairwise_distance_mask1_dilate intersectionAndUnionGPU check_mkdir pairwise_distance convert_to_syncbn poly_learning_rate AverageMeter check_makedirs colorize intersectionAndUnion knn pairwise_distance_mask step_learning_rate init_weights pairwise_distance_mask1 ChamferLoss ChamferLoss grid lmax adjacency laplacian distance_sklearn_metrics rescale_L fourier distance_scipy_spatial conv1dbr conv2dbr fcbr fcdbr rotate_point_cloud_by_angle_with_normal rotate_point_cloud shuffle_points rotate_point_cloud_with_normal loadDataFile getDataFiles load_h5 rotate_point_cloud_z shuffle_data rotate_perturbation_point_cloud_with_normal rotate_point_cloud_by_angle rotate_perturbation_point_cloud jitter_point_cloud shift_point_cloud random_scale_point_cloud random_point_dropout mean sqrt sum max add_argument ArgumentParser makedirs join checkpoint_dir print check_folder exit model_dir seed join checkpoint_dir format extract_feature build_model print manualSeed exit randint system test model_dir manual_seed parse_args train network PDGN_v1 transpose mm view list hasattr __data_parallel_replicate__ modules enumerate len replicate data isinstance items list CfgNode deepcopy zip _decode_cfg_value setattr _check_and_coerce_cfg_value_type literal_eval append type conditional_cast debug join format print readlines strip append len append named_parameters DataParallel isinstance state_dict copyfile format save load get format print load_state_dict isfile seed transpose sum matmul transpose sum softmax matmul topk size transpose matmul scatter softmax sum cuda topk size transpose matmul scatter float sum cuda 
topk size transpose shuffle matmul scatter cuda float sum array param_groups max param_groups float reshape size histogram copy cpu histc view mkdir makedirs isinstance named_parameters bias normal_ weight modules xavier_normal_ LSTM kaiming_normal_ constant_ Linear named_modules eps affine num_features isinstance SynchronizedBatchNorm2d BatchNorm3d SynchronizedBatchNorm3d momentum BatchNorm1d BatchNorm2d SynchronizedBatchNorm1d recursive_set convert putpalette meshgrid reshape empty linspace sort pdist squareform sort pairwise_distances exp setdiag reshape multiply mean shape repeat coo_matrix toarray squeeze size identity sqrt sum diags toarray eigs sort eig eigh eigsh shape identity arange shuffle len arange shuffle reshape cos pi dot shape uniform sin zeros array range reshape cos pi dot shape uniform sin zeros array range reshape cos pi dot uniform sin array range randn reshape dot shape zeros range array clip reshape cos dot shape sin zeros array range reshape cos dot shape sin zeros array range randn reshape dot shape zeros range array clip shape clip randn shape uniform range shape uniform range random range File | ## Progressive Point Cloud Deconvolution Generation Network by Le Hui, Rui Xu, Jin Xie, Jianjun Qian, and Jian Yang, details are in [paper]( https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600392.pdf). ### Usage 1. requires: ``` CUDA10.1 Pytorch 1.7.1 Python3.7 ``` 2. build ops: | 2,116 |
fra31/fab-attack | ['adversarial attack'] | ['Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack'] | FAB_l1.py FAB_l2.py FAB_linf.py settings.py utils.py FAB_linf_pt.py test_attack.py linear_approximation_search get_diff_logits_grads_batch FABattack_l1 projection_l1_hyperplane projection_l2_hyperplane get_diff_logits_grads_batch linear_approximation_search FABattack_l2 get_diff_logits_grads_batch FABattack_linf linear_approximation_search projection_linf_hyperplane fab_pt get_diff_logits_grads_batch compute_jacobian linear_approximation_search projection_linf init_layers init saver dense_layers get_weights_conv conv_layers Model load_dataset forward_pass squeeze expand_dims moveaxis array run cumsum log2 floor cuda max ones squeeze shape ceil sum cat LongTensor astype nonzero float type sort min zeros sum y arange abs reshape logical_and copy range clip run arange randn get_diff_logits_grads_batch where abs cuda clip run ones squeeze argmin shape final_search sum range corr_pred eps n_labels astype copy minimum time reshape maximum alpha_max linear_approximation_search zeros numpy cumsum log2 unsqueeze floor cuda max ones squeeze ceil sum cat LongTensor astype nonzero float type sort min zeros numpy sqrt arange randn get_diff_logits_grads_batch where abs cuda clip run ones squeeze argmin shape final_search sum range corr_pred eps n_labels astype copy sqrt minimum time reshape maximum alpha_max linear_approximation_search zeros numpy arange cumsum log2 unsqueeze floor abs cuda max FloatTensor ones squeeze shape ceil sum LongTensor astype nonzero float type min clone argsort zeros amax arange get_diff_logits_grads_batch where abs cuda clip run ones squeeze argmin shape uniform final_search sum range corr_pred n_labels astype copy minimum reshape maximum alpha_max linear_approximation_search zeros numpy amax Variable eval to numpy data backward zero_ zeros range cuda is_cuda zero_gradients arange cumsum log2 unsqueeze floor abs cuda max FloatTensor ones squeeze shape ceil sum LongTensor astype nonzero float type min clone argsort zeros eval numpy argmax arange get_diff_logits_grads_batch where abs cuda clip ones squeeze argmin from_numpy shape uniform sum format las astype copy eval overshooting minimum print reshape maximum alpha_max linear_approximation_search backward_beta zeros numpy array amax str im eps n_iter p model targetcl alpha_max n_restarts savemat final_search dataset relu ones reshape squeeze weights matmul bias max_pool conv2d range len squeeze astype float32 expand_dims loadmat | # FAB: a Fast Adaptive Boundary Attack This is the code relative to the method introduced in **Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack**\ Francesco Croce, Matthias Hein\ *University of Tübingen*\ [https://arxiv.org/pdf/1907.02044.pdf](https://arxiv.org/pdf/1907.02044.pdf) We propose a new white-box adversarial attack against neural networks-based classifiers. FAB-attack aims at changing the classification of a clean input applying a perturbation with minimal Lp-norm, for p in {1, 2, inf}. It achieves quickly good quality results, does not need the specification of a step size and tries to track the desicion boundary. | 2,117 |
fra31/mmr-universal | ['adversarial attack'] | ['Provable defenses against adversarial examples via the convex outer adversarial polytope'] | kolter_wong/convex_adversarial/dual.py mip/mip_verify.py kolter_wong/convex_adversarial/dual_layers.py kolter_wong/convex_adversarial/__init__.py kolter_wong/eval.py kolter_wong/models.py train.py kolter_wong/utils.py data.py kolter_wong/trainer.py mip/parse.py kolter_wong/convex_adversarial/dual_inputs.py utils.py regularizers.py kolter_wong/custom_layers.py robustml_mnist.py attacks.py eval.py kolter_wong/convex_adversarial/utils.py models.py kolter_wong/attacks.py kolter_wong/convex_adversarial/dual_network.py eval_pgd_attack eval_cw_attack MadryEtAl pgd_attack cw_attack zero_pad get_dataset dense_to_one_hot augment_train get_batch_iterator augment_test eval_in_batches forward_pass load_model export_weights CNN LeNet LeNetSmall MLP calc_v_fc zero_out_non_min_distances mmr_cnn calc_v_conv get_min_distances mmr_fc Model load_model adv_train eval_in_batches forward_pass_cleverhans save_results average_gradients save_combined_bounds get_n_total_hidden_units get_hidden_units avg_tensor_list Logger create_hps_str create_folders _fgs pgd mean _pgd fgs attack Flatten Conv2dUntiedBias eval_lb_db lenet_avgpool lenet_large fc lenet_small select_model restore_model resnet Flatten train_baseline evaluate_baseline sampler_robust_cascade AverageMeter evaluate_robust_cascade evaluate_robust evaluate_madry train_robust robust_loss_cascade train_madry argparser data_loader fashion_mnist_loaders args2kwargs cifar_loaders DualObject DualLayer InfBallProjBounded InfBall select_input InfBallBounded InfBallProj DualLinear conv_transpose2d Identity DualDense DualBatchNorm2d select_layer conv2d DualReshape DualConv2d unbatch DualReLU batch DualReLUProj robust_loss_parallel RobustBounds DualNetBounds robust_loss InputSequential robust_loss_with_point_errors DualNetwork get_epsilon epsilon_from_model DenseSequential GR Dense GL p_lower full_bias p_upper verify convert_mat_tf_to_mip parse_summary parse_bounds_pointwise summarize_processed_solve_status process_solve_status get_bounds_pointwise get_dt get_summary preprocess_summary_file mean sum run run stop_gradient CarliniWagnerL2 generate stop_gradient MadryEtAl generate reshape len loadmat format zeros arange unique len run range len dict savemat b W run load T list reshape transpose loadmat W flatten shape b numpy zip model_path expand_dims kw_model keys append reduce_max rand less_equal float32 cast get_min_distances tile expand_dims shape reshape reshape conv2d concat zero_out_non_min_distances calc_v_conv abs W transpose squeeze calc_v_fc reduce_sum cast append less expand_dims range matrix_diag tile zip minimum norm reshape float32 greater maximum eye len concat zero_out_non_min_distances abs transpose squeeze calc_v_fc reduce_sum cast append less expand_dims range matrix_diag tile norm reshape float32 greater maximum len int batch_size p pgd_attack pgd_n_iter ae_frac makedirs lmbd format gamma_db p gamma_linf nn_type gamma_l1 gamma_rb lmbd_linf lmbd_l1 dataset ae_frac format to_file dict savemat save dataset create_folders exp_name run mean format print savemat stack concat reduce_mean zip append expand_dims ceil data model backward Variable size zero_grad Adam sign sum data model Variable backward size clamp zero_grad Adam sign sum range format atk print squeeze mean parameters append robust_loss_batch enumerate data all view backward Variable size f astype numpy out_features append empty_cache to range is_cuda 
enumerate int lenet_avgpool lenet_large fc lenet_small isinstance Sequential out_channels Linear Conv2d normal_ sqrt modules zero_ ReLU append range len Conv2dUntiedBias isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear isinstance block DenseSequential extend out_channels Conv2d normal_ sqrt modules zero_ range int list reshape transpose flatten parameters shape sqrt zip append to run clip_grad_norm_ zero_grad robust_loss squeeze update format size item flush enumerate time backward Variable print AverageMeter parameters empty_cache train step len model set_grad_enabled squeeze append sum robust_loss_parallel update format size astype eval item robust_loss_with_point_errors flush enumerate time Variable print AverageMeter empty_cache len update time format model Variable backward size AverageMeter zero_grad step print train sum enumerate flush len update time format model Variable print size AverageMeter eval sum enumerate flush len data model zero_grad sign Adam sum range update format size flush enumerate time backward Variable clamp print AverageMeter train step len update time format model Variable print size AverageMeter _pgd eval sum enumerate flush len data model size type_as robust_loss item append float sum enumerate format print Variable set_grad_enabled SubsetRandomSampler DataLoader cat robust_loss_cascade dataset cuda enumerate append len update time format Variable print squeeze AverageMeter set_grad_enabled size eval item robust_loss_cascade empty_cache enumerate flush len MNIST format FashionMNIST DataLoader TensorDataset permute CIFAR10 loadmat long MNIST DataLoader DataLoader Normalize CIFAR10 sorted format print add_argument prefix ArgumentParser vars parse_args epsilon cuda_ids format epsilon_from_model l1_eps print Variable delta l1_proj m cuda list detach isinstance zip print ReLU BatchNorm2d Linear append append size item size item DataParallel set_start unsqueeze ReLU cuda is_cuda l append bounds size select_layer reversed InputSequential zip item enumerate T isinstance Variable any out_features int isinstance numel Conv2d unsqueeze Linear get_epsilon isinstance print numel unsqueeze append sum l time max p_upper p_lower flatten zip format convert_mat_tf_to_mip replace print system savemat loadmat create_folders int format replace get_summary float format get_bounds_pointwise replace join apply DataFrame read_csv get_dataset sum len count summarize_processed_solve_status get_dt get_dt | # MMR-Universal This is the code relative to the paper <p align="left"><img src="https://raw.githubusercontent.com/fra31/mmr-universal/master/images/pl_gh_2.png" width="800"> **Francesco Croce, Matthias Hein** University of Tübingen [https://arxiv.org/abs/1905.11213](https://arxiv.org/abs/1905.11213) ## Main idea We introduce a regularization scheme which aims at expanding the linear regions of ReLU-networks in both L1- and Linf-sense. We show that in this way we are able to achieve *simultaneously provable robustness wrt all the Lp-norms for p>=1*. | 2,118 |
fractalego/pynsett | ['relation extraction'] | ['Pynsett: A programmable relation extractor'] | pynsett/examples/sentence_creation.py pynsett/inference/forward_inference.py pynsett/server/__init__.py pynsett/drt/drs_rule.py pynsett/writer/relation_triplets_writer.py pynsett/drt/drs_matcher.py pynsett/discourse/global_graph_visitors.py pynsett/examples/multiple_match_example.py pynsett/drt/__init__.py pynsett/discourse/__init__.py pynsett/auxiliary/__init__.py pynsett/auxiliary/prior_knowedge.py pynsett/auxiliary/tags.py pynsett/examples/extract_from_wikipedia.py pynsett/auxiliary/names_modifier.py pynsett/examples/extract_written_by.py pynsett/auxiliary/line_finder.py pynsett/auxiliary/random.py pynsett/server/server.py pynsett/examples/negation.py pynsett/drt/node_matcher.py pynsett/examples/discourse_example.py pynsett/discourse/anaphora.py pynsett/examples/extract_from_text.py docs/source/conf.py pynsett/knowledge/__init__.py pynsett/tests/test_sentences.py pynsett/auxiliary/collator.py pynsett/knowledge/drs_ner_cleaner.py pynsett/auxiliary/rule_utils.py pynsett/drt/drs.py pynsett/tests/test_api.py pynsett/auxiliary/transform.py pynsett/extractor/__init__.py pynsett/examples/start_server.py pynsett/metric/__init__.py pynsett/discourse/paragraphs.py pynsett/tests/test_simpleParagraphTokenizer.py pynsett/writer/drt_triplets_writer.py pynsett/discourse/single_tokens_visitors.py pynsett/inference/__init__.py pynsett/__main__.py pynsett/nl/__init__.py pynsett/install/__init__.py setup.py pynsett/tests/test_pynsett.py pynsett/writer/base_writer.py pynsett/writer/__init__.py pynsett/discourse/discourse.py Collator LineFinder assign_proper_index_to_nodes_names _needs_to_be_made_unique DiscourseNamesModifier SentenceNamesModifier get_wikidata_knowledge get_generic_knowledge get_random_name _restore_original_names _make_names_unique repeat_db_rules_n_times tag_is_determiner tag_is_only_noun tag_is_negation tag_is_verb tag_is_adjective tag_is_modal tag_is_noun tag_is_cardinal transform_triplets_into_api_edges_and_nodes SingleSentenceAnaphoraVisitor AllenCoreferenceVisitor InterSentenceAnaphoraVisitor AllenCoreferenceVisitorsFactory Discourse DiscourseBase Paragraph SentenceJoinerVisitor CoreferenceJoinerVisitor GraphJoinerVisitor BaseParagraphTokenizer SimpleParagraphTokenizer HeadTokenVisitor _create_graph_from_natural_language _get_union_of_graphs Drs DrsMatcher DrsRule VectorNodeMatcher Extractor clean_between eliminate_spaces ForwardInferenceChain BaseForwardInference ForwardInference UniqueNamesModifier find_weight_between CustomFileDownloader DrsNERCleaner GloveMetric MetricFactory MetricBase SpacyMetric simplify_tag SpacyParser create_word_nodes get_file get_resource root_dir get_drt get_triplets get_wikidata_page set_rules get_programmable_relations_page get_wikidata_triplets TestAPI PynsettUnitTests PynsettUnitTests TestSimpleParagraphTokenizer BaseWriter DRTTripletsWriter RelationTripletsWriter _needs_to_be_made_unique str read add_rules Knowledge read add_rules Knowledge vs es vs es deepcopy _restore_original_names query get_graph _make_names_unique range append list unique_everseen enumerate getLogger getLogger create_empty getLogger getLogger getLogger getLogger read execute Graph repeat_db_rules_n_times Parser GraphDatabase Graph GraphDatabase query index len index len replace UniqueNamesModifier dirname tag_is_adjective tag_is_verb tag_is_noun Collator set data extract Discourse Extractor loads data extract Discourse Extractor loads data loads add_rules Knowledge data Discourse 
DRTTripletsWriter apply loads join root_dir get join get_file root_dir get_file join root_dir get_file join root_dir getLogger getLogger | Pynsett: A programmable relation extraction tool =============================================== Installation ------------ Before installing the package you need to install the tools for compiling python-igraph ```bash sudo apt-get install build-essential python-dev python3-dev ``` The basic version can be installed by typing ```bash | 2,119 |
francesclluis/source-separation-wavenet | ['music source separation'] | ['End-to-end music source separation: is it possible in the waveform domain?'] | util.py models.py layers.py datasets.py main.py separate.py MultiInstrumentMUSDB18Dataset SingingVoiceMUSDB18Dataset Subtract Slice AddSingletonDepth Add set_system_settings training get_dataset get_command_line_arguments main inference get_valid_output_folder_path load_config SingingVoiceSeparationWavenet MultiInstrumentSeparationWavenet separate_sample load_wav compute_receptive_field_length one_hot_encode keras_float_to_uint8 linear_to_ulaw keras_ulaw_to_linear get_subdict_from_dict l1_l2_loss float_to_uint8 write_wav keras_linear_to_ulaw uint8_to_float binary_encode get_condition_input_encode_func get_indices_subsequence normalize pretty_json_dump ensure_keys_in_dict preemphasis ulaw_to_linear read_wav get_sequence_with_singing_indices contains_voice ensure_sample_rate keras_uint8_to_float one_hot_decode wav_to_float dir_contains_files setrecursionlimit setLevel INFO parse_args set_defaults add_option OptionParser open fit_model get_random_batch_generator get_dataset SingingVoiceSeparationWavenet MultiInstrumentSeparationWavenet join mkdir SingingVoiceSeparationWavenet int join load_wav bool one_shot batch_size print endswith separate_sample mixture_input_path target_field_length MultiInstrumentSeparationWavenet get_valid_output_folder_path config set_system_settings training get_command_line_arguments inference load_config int target_field_length input_length list join write_wav tolist tqdm receptive_field_length separate_batch ceil zeros array range min max astype max astype abs log sign abs log sign min max astype float abs sign abs sign array isinstance array isinstance astype all print dump dumps open read wav_to_float read_wav ensure_sample_rate array abs max asarray arange insert squeeze where mean diff append zeros abs max range len ceil randint len asarray mean append abs max range len listdir | A Wavenet for Music Source Separation ==== A neural network for end-to-end music source separation, as described in [End-to-end music source separation: is it possible in the waveform domain?](https://arxiv.org/abs/1810.12187) Listen to separated samples [here](http://jordipons.me/apps/end-to-end-music-source-separation/) What is a Wavenet for Music Source Separation? ----- The Wavenet for Music Source Separation is a fully convolutional neural network that directly operates on the raw audio waveform. It is an adaptation of [Wavenet](https://deepmind.com/blog/wavenet-generative-model-raw-audio/) that turns the original causal model (that is generative and slow), into a non-causal model (that is discriminative and parallelizable). This idea was originally proposed by [Rethage et al.](https://arxiv.org/abs/1706.07162) for speech denoising and now it is adapted for monaural music source separation. Their [code](https://github.com/drethage/speech-denoising-wavenet) is reused. The main difference between the original Wavenet and the non-causal adaptation used, is that some samples from the future can be used to predict the present one. As a result of removing the autoregressive causal nature of the original Wavenet, this fully convolutional model is now able to predict a target field instead of one sample at a time – due to this parallelization, it is possible to run the model in real-time on a GPU. | 2,120 |
francescosecci/Python_Image_Failures | ['autonomous driving'] | ['RGB cameras failures and their effects in autonomous driving applications'] | nodemos.py chromaticaberration.py nobayesfilter-greyscale.py overlay_images2.py nobayesfilter-greyscale2.py overlay_images.py brightness.py deadpixel200-fixedposition.py deadpixel-fixedposition.py deadpixel50-fixedposition.py black.py noise.py white.py blurred.py sharpness.py add_jitter blend_images polar_to_cartesian cartesian_to_polar get_gauss vertical_gaussian add_chromatic rgb2gray ceil zeros round range ceil zeros round range int list sum range sum multiply transpose zeros round range fromarray uint8 asarray ANTIALIAS size polar_to_cartesian merge cartesian_to_polar vertical_gaussian resize round split size merge split int ANTIALIAS new convert size alpha_composite resize crop putalpha | # Python Image Failures This repository contains codes that are used to simulate failures that can occur in a camera during the acquisition/processing phase. The codes were found and modified, so as to be optimal for the work that had to be done. ### How to cite our work: Francesco Secci, Andrea Ceccarelli. "On failures of RGB cameras and their effects in autonomous driving applications." In: The 31st International Symposium on Software Reliability Engineering (ISSRE) 2020. Andrea Ceccarelli, Francesco Secci: "RGB cameras failures and their effects in autonomous driving applications", In press at IEEE Transactions on Dependable and Secure Computing. ## How is this repo organized There are various python files, and some images. In general: - sim1.jpg is the target image, on which failures are applied | 2,121 |
francesita/CS-Embed-SemEval2020 | ['sentiment analysis', 'word embeddings', 'multilingual word embeddings'] | ['CS-Embed at SemEval-2020 Task 9: The effectiveness of code-switched word embeddings for sentiment analysis'] | tweet_collect.py cs_model.py | # CS-Embed at SemEval-2020 Task 9: The effectiveness of code-switched word embeddings for sentiment analysis Code and specs for CS-Embed's contribution to SemEval-2020 Task 9. * tweet_ids.zip : contains the tweet-id's of the tweets used to create the code-switched embeddings * tweet_collect.py: code used to collect tweets from twitter using Tweepy and keyword list * cs_model.py: code used to train bilstm model * cs_embeddings.tar.gz: word2vec code-switched embeddings with dimension 100. These are the main contribution for SemEval2020: Task 9 ### Code-Switch BiLSTM Model Summary _________________________________________________________________ |Layer (type)|Output Shape|Param No.| |-----------------------|-------------------------|--------------| | 2,122 |
frankaging/LAT_for_Transformer | ['sentiment analysis', 'time series'] | ['Structured Self-Attention Weights Encode Semantics in Sentiment Analysis', 'Structured Self-AttentionWeights Encode Semantics in Sentiment Analysis'] | code/model/attention_util.py code/model/make_masks.py code/model/dictionary_analyze.py code/model/attention_viz.py code/model/word_cloud.py code/model/t/Models.py code/model/t/Modules.py code/model/models.py code/model/train.py code/model/t/Layers.py code/model/attn_analyze.py code/model/t/SubLayers.py code/model/sentence_highlight.py code/model/datasets.py head_attn_viz_func head_heatmap_viz_func trace_attn_viz_func head_heatmap_viz_func_individual save_params generate_class ratingInputHelper extract_attn_weight save_checkpoint padRating oneHotVector padInputHelper textInputHelper eval_ccc evaluateOnEval normalize_lap sortSST generateBatchSST plot_predictions save_predictions load_token_dict_sst generateInputChunkHelper softmax getSeqList plot_eval generateTrainBatch videoInputHelper constructInput padFeaturesSST load_checkpoint calculate_accuracy get_attn_weight chunks padInput load_data normalize_w stringOut save_params SEND count_parameters generate_class ratingInputHelper save_checkpoint padRating oneHotVector SST padInputHelper textInputHelper eval_ccc evaluateOnEval normalize_lap sortSST generateBatchSST plot_predictions save_predictions generateInputChunkHelper softmax getSeqList main plot_eval generateTrainBatch videoInputHelper constructInput padFeaturesSST load_checkpoint calculate_accuracy chunks padInput load_data normalize_w stringOut pad_and_merge len_to_mask load_dataset MultiseqDataset seq_collate seq_collate_dict SEND sanity_check_lemma find_lemma_map main ALL SST generate_token_mask xstransformer_gs get_attention_mask convolve TransformerLSTMAttn get_subsequent_mask pad_shift get_pad_mask TransformerLinearAttn clean_word SEND rescale generate SST save_params SEND generate_class ratingInputHelper save_checkpoint padRating oneHotVector SST padInputHelper eval_ccc evaluateOnEval sortSST generateTrainBatchRandom generateBatchSST plot_predictions save_predictions generateInputChunkHelper getSeqList main plot_eval generateTrainBatch videoInputHelper constructInput evaluate padFeaturesSST load_checkpoint calculate_accuracy chunks padInput load_data train plot_unsign EncoderLayer PositionalEncoding attendedEncoder ScaledDotProductAttention MultiHeadAttention PositionwiseFeedForward add_subplot tick_top xticks tick_params yticks show str list ylabel savefig set_color range plot close set_label_position annotate keys xlabel dict set_facecolor figure add_subplot tick_top xticks tick_params max yticks show str list ylabel matmul ylim savefig set_color append sum range plot close reversed stack set_label_position softmax annotate keys print clamp xlabel clone dict set_facecolor figure set_major_formatter tick_params tick_top heatmap yticks set_aspect show list FloatTensor matmul add colorbar savefig set_color random_percentage append range format set_xticklabels close reversed set upper set_label_position annotate keys FuncFormatter print xlabel clone figure zeros array len set_yticklabels unsqueeze set_major_formatter tick_params tick_top heatmap show list FloatTensor squeeze ylabel matmul add colorbar StrMethodFormatter savefig set_color sum range set_xticklabels close reversed set set_label_position softmax annotate keys print xlabel figure zeros len mean var range len list sort zip append tensor list sort shuffle generateInputChunkHelper keys zeros max 
range len str list print eval_ccc tolist generateTrainBatch eval numpy device append to forward keys backward_nlap len format set_title plot concatenate pause set_xlim draw tight_layout cla savefig zip enumerate set_ylim len show set_title plot set_xlabel min add_subplot subplots_adjust set_ylabel figure legend append range len join format to_csv seq_ids zip DataFrame get h_dim set_index insert to_csv attn_len embed_dim float DataFrame save load print load_dataset append int range int copy isnan append array range append sum len videoInputHelper textInputHelper ratingInputHelper append len append max len padInputHelper append append max exp array dict list keys append max list sort zip append tensor padFeaturesSST sort sortSST zeros max range len to range device max range append max range generate_class device open seed data_dir tolist load_state_dict range dump format eval info model_path manual_seed is_available TransformerLinearAttn BCELoss load print load_checkpoint dict empty_cache len load join int list dump data_dir dict out_dir append float keys open backward_tf_attn squeeze ones_like backward Variable modalities out_dir padRating device max open seed str list sorted data_dir evaluateOnEval tolist Adam MSELoss load_state_dict append range dump format getSeqList manual_seed model_path is_available info zip deepcopy TransformerLSTMAttn constructInput print load_checkpoint dict parameters padInput load_data seq_ids empty_cache len model generate_class zero_grad device backward_nlap open seed list data_dir tolist load_state_dict append to sum generateBatchSST range ones_like format dump eval softmax manual_seed model_path is_available TransformerLinearAttn BCELoss keys info load backward print load_checkpoint Variable calculate_accuracy extend dict out_dir empty_cache stringOut len SST SEND len max expand len from_numpy zeros float max enumerate list sort maximum pad_and_merge len_to_mask zip append zeros max len sort len_to_mask pad_and_merge max list print lemmatize WordNetLemmatizer dict get_word_forms keys str list print strip len dict eval startswith input keys range append split set set eval input keys print set dict eval input keys ALL bool squeeze stack append to range bmm squeeze stack eye range to max range len to device stack size bool asarray max min append replace sample generate load generate list LongTensor sort shuffle generateInputChunkHelper keys randperm append zeros max range cat len list format criterion model backward info zero_grad generateTrainBatchRandom device is_available empty_cache to step keys random_train generateTrainBatch model list format info model eval_ccc is_available eval device append empty_cache to numpy keys generateTrainBatch model_dir save_checkpoint join train epochs shuffle modalities model_dir save_checkpoint oneHotVector Adam epochs join criterion parameters step generate_from_frequencies axis tight_layout dict imshow savefig figure array open | # Structured Self-Attention Weights Encode Semantics in Sentiment Analysis Code base for the paper accepted to the BlackboxNLP at EMNLP2020. ## Description In this paper, we show that self-attention scores encodes simple semantics by considering sentiment analysis tasks. In contrast to gradient-based feature attribution methods that leverage gradients, we propose a simple yet effective Layer-wise Attention Tracing (LAT) method to analyze structured attention weights which in turn yields semantically meaningful explanations. 
<img src="https://i.ibb.co/WcXBX81/lat-v1-4.png" width="300"> - Attention tracing diagram through self-attention layers in the Transformer model. ## Models and Datasets ### Self-attention Encoder + LSTM Decoder This model is used for one of the experiments on the Stanford Emotional Narratives Dataset (SEND). | 2,123
frankaging/Quasi-Attention-ABSA | ['sentiment analysis', 'aspect based sentiment analysis'] | ['Context-Guided BERT for Targeted Aspect-Based Sentiment Analysis'] | code/analyze.py code/convert_tf_checkpoint_to_pytorch.py code/util/processor.py code/util/tokenization.py code/util/train_helper.py code/run_classifier.py code/util/optimization.py code/model/QACGBERT.py code/util/lrp.py code/model/CGBERT.py code/util/evaluation.py code/model/BERT.py code/util/args_parser.py GradOnly getModelOptimizerTokenizer InputFeatures GradDiff MockTrain _truncate_seq_pair convert_examples_to_features BaseEval Train Metrics BaseTrain router convert run BERTPooler BertForSequenceClassification BERTIntermediate BERTOutput BERTSelfOutput gelu BERTLayerNorm BERTSelfAttention BertConfig BERTLayer BERTAttention BertForQuestionAnswering BERTEncoder BERTEmbeddings BertModel ContextBERTEncoder BERTIntermediate BERTOutput BERTSelfOutput ContextBERTAttention CGBertForSequenceClassification gelu BERTLayerNorm mask ContextBERTPooler ContextBERTLayer BertConfig ContextBERTSelfAttention BERTEmbeddings ContextBertModel BERTIntermediate ContextBERTAttention get_inputivation ContextBertModel mask QACGBertForSequenceClassification init_hooks_lrp BERTSelfOutput gelu BERTLayerNorm ContextBERTLayer ContextBERTPooler BertConfig get_activation BERTEmbeddings get_activation_multi ContextBERTEncoder BERTOutput ContextBERTSelfAttention semeval_PRF sentihood_strict_acc sentihood_macro_F1 sentihood_AUC_Acc semeval_Acc main l_lap_grad a_lap_vectorize BERTAdam warmup_cosine warmup_constant warmup_linear Sentihood_NLI_M_Processor Semeval_NLI_M_Processor DataProcessor InputExample WordLevelTokenizer FullTokenizer BasicTokenizer WordpieceTokenizer printable_text convert_tokens_to_ids load_vocab whitespace_tokenize convert_to_unicode _is_whitespace _is_control _is_punctuation evaluate_fast system_setups getModelOptimizerTokenizer evaluate InputFeatures make_weights_for_balanced_classes _truncate_seq_pair data_and_model_loader convert_examples_to_features step_train text_b InputFeatures convert_tokens_to_ids len tqdm _truncate_seq_pair tokenize append text_a enumerate pop len load FullTokenizer vocab ContextAwareBertForSequenceClassification BertForSequenceClassification WordLevelTokenizer BiLSTM BERTAdam len Adam parameters BertConfig load_state_dict info BertSimpleForSequenceClassification from_json_file HeadwiseContextAwareBertForSequenceClassification open gradient_accumulation_steps model to backward zero_grad tqdm mean is_available empty_cache train step enumerate eval eval_test save train norm model to zero_grad square shape softmax is_available empty_cache train zeros append str list semeval_PRF sentihood_strict_acc sentihood_macro_F1 get_y_true sentihood_AUC_Acc OrderedDict get_y_pred semeval_Acc info task_name keys range CosineSimilarity len get_train_examples DataLoader DataParallel DistributedDataParallel output_dir device Metrics tensor from_json_file context_standalone seed getModelOptimizerTokenizer max_seq_length data_dir len get_labels max_context_length bert_config_file device_count TensorDataset convert_examples_to_features BaseEval to manual_seed_all init_process_group head_sp_loss num_train_epochs info manual_seed int join print accumulate_gradients get_test_examples bool local_rank train_batch_size makedirs Train save from_json_file tf_checkpoint_path transpose bert_config_file shape from_numpy getattr list_variables append state_dict pytorch_dump_path format zip BertModel load_variable join int print fullmatch any 
split int evaluate_fast str system_setups data_and_model_loader step_train info num_train_epochs trange evaluate_interval zeros max range len register_forward_hook get_inputivation get_activation int range len add set intersection range len mean append accuracy_score array range roc_auc_score len add set intersection range len range len pred_data_dir list semeval_PRF print sentihood_strict_acc add_argument get_y_true sentihood_macro_F1 sentihood_AUC_Acc OrderedDict get_y_pred semeval_Acc ArgumentParser parse_args task_name keys crop_function sum contiguous where stack unsqueeze get_device tensor to sum is_cuda isinstance PY3 PY2 isinstance PY3 PY2 OrderedDict append strip split category category startswith startswith category ord len float sum range enumerate list items CGBertForSequenceClassification QACGBertForSequenceClassification OrderedDict startswith to_json_string seed int manual_seed_all join manual_seed init_process_group print device device_count accumulate_gradients from_json_file bert_config_file output_dir info bool local_rank train_batch_size makedirs get_train_examples DataLoader DataParallel DistributedDataParallel tensor context_standalone getModelOptimizerTokenizer max_seq_length data_dir DistributedSampler get_labels max_context_length TensorDataset convert_examples_to_features to make_weights_for_balanced_classes num_train_epochs info int WeightedRandomSampler RandomSampler get_test_examples train_batch_size len str list semeval_PRF concatenate sentihood_strict_acc sentihood_macro_F1 tqdm OrderedDict sentihood_AUC_Acc eval semeval_Acc info keys str semeval_PRF concatenate sentihood_strict_acc sentihood_macro_F1 tqdm OrderedDict sentihood_AUC_Acc eval semeval_Acc output_dir info save state_dict gradient_accumulation_steps evaluate model to backward zero_grad tqdm mean set_postfix info is_available empty_cache train step enumerate | # Quasi-Attention-ABSA Codebase for Context-Guided BERT for Targeted Aspect-Based Sentiment Analysis (AAAI2021) ## Contents * [Citation](#Citation) * [Quick start](#quick-start) * [License](#license) ## Citation ``` @inproceedings{wu2020context, title={Context-Guided BERT for Targeted Aspect-Based Sentiment Analysis}, | 2,124 |
frankkramer-lab/covid19.MIScnn | ['data augmentation', 'semantic segmentation'] | ['Automated Chest CT Image Segmentation of COVID-19 Lung Infection based on 3D U-Net'] | scripts/stepwise_performance/run_miscnn.noDA_noPreProc.py scripts/stepwise_performance/run_miscnn.noPreProc.py scripts/run_evaluation.py scripts/stepwise_performance/run_miscnn.noDA.py scripts/test/data_exploration.py scripts/data_exploration.py scripts/cv_analysis/miscnn_k2.py scripts/cv_analysis/miscnn_k3.py scripts/utils/identify_resamplingShape.py scripts/run_miscnn.py scripts/cv_analysis/evaluate.py scripts/run_preprocessing.py scripts/stepwise_performance/stepwise_fitting_evaluation.py scripts/test/evaluate.py scripts/test/predict.py scripts/cv_analysis/pp.py scripts/cv_analysis/miscnn_k4.py scripts/download_data.py download_from_url calc_Precision calc_Sensitivity calc_Accuracy visualize_evaluation overlay_segmentation calc_Specificity plot_fitting calc_IoU calc_DSC calc_Precision calc_Sensitivity calc_Accuracy visualize_evaluation overlay_segmentation calc_Specificity plot_fitting calc_IoU calc_DSC calc_Precision calc_Sensitivity calc_Accuracy calc_Specificity calc_IoU calc_DSC get int print close tqdm getsize exists join subplots set_title squeeze close zfill shape imshow mkdir save overlay_segmentation zeros FuncAnimation uint8 min astype greater where shape stack zeros max clip append sum equal range append sum equal range append sum equal range equal logical_not append sum range size logical_not range append sum equal append sum equal range ylab scale_colour_discrete xlab theme_bw melt ggplot ggtitle scale_y_continuous save geom_smooth aes | # Robust Chest CT Image Segmentation of COVID-19 Lung Infection based on limited data [](https://doi.org/10.5281/zenodo.3902293) In this paper, we proposed and evaluated an approach for automated segmentation of COVID-19 infected regions in CT volumes. Our method focused on on-the-fly generation of unique and random image patches for training by exploiting heavy preprocessing and extensive data augmentation. Thus, it is possible to handle limited dataset sizes which act as variant database. Instead of new and complex neural network architectures, we utilized the standard 3D U-Net. We proved that our medical image segmentation pipeline is able to successfully train accurate as well as robust models without overfitting on limited data. Furthermore, we were able to outperform current state-of-the-art semantic segmentation approaches for lungs and COVID-19 infection. Our work has great potential to be applied as a clinical decision support system for COVID-19 quantitative assessment and disease monitoring in the clinical environment. Nevertheless, further research is needed on COVID-19 semantic segmentation in clinical studies for evaluating clinical performance and robustness. The models, predictions, visualizations and evaluation (scores, figures) are available under the following link: https://doi.org/10.5281/zenodo.3902293 **This work does NOT claim clinical performance in any means and underlie purely educational purposes.**  ## Reproducibility **Requirements:** - Ubuntu 18.04 | 2,125 |
fred2008/TCMSA | ['sentiment analysis'] | ['Tree Communication Models for Sentiment Analysis'] | test_new.py data.py util.py tree_lstm.py tools/tensorflow_rename_variables.py config.py graph_encoder_utils.py train_new.py opt.py model.py get_config get_configs update_config Dataset download_and_unzip Data_util GraphEncoder highway_layer positional_encoding multi_highway_layer TreeCommunication Optimization dev_eval dev_eval TreeLSTM BinaryTreeLSTMCell Embedding_drop load_namespace save_namespace Bunch metric2str Timer main rename join list shape reshape highway_layer format range get_batches items list gen_feed_dict stack append sum enumerate run vars get_checkpoint_state print rename getopt exit | # Tree Communication Models for Sentiment Analysis (TCMSA)
Code for the Tree Communication Model (TCM) for sentiment analysis.
# Requirement
* python3
* [tensorflow](https://tensorflow.google.cn/)
* [fold](https://github.com/tensorflow/fold)
# Get data
| 2,126 |
frederick0329/Wikipedia_title_dataset | ['text classification'] | ['Learning Character-level Compositionality with Visual Features'] | crawl.py raw2format.py WikiCrawler.py raw2format crawl readCategoryFile to_utf8 print close compile open isinstance read replace quote add urlopen loads to_utf8 open | # Wikipedia_title_dataset This repo consists of the data used for acl 2017 [Learning Character-level Compositionality with Visual Features](https://arxiv.org/abs/1704.04859) This is an implementation for crawling the title and its corresponding categories of a Wikipedia page. The crawler will crawl the categories according to category_list_lang.txt The dataset is already crawled in the folder acl2017_data with the following command: ``` python crawl.py -l zh -n 100000 python crawl.py -l ja -n 100000 | 2,127
fredzzhang/hicodet | ['human object interaction detection'] | ['Spatially Conditioned Graphs for Detecting Human-Object Interactions'] | utilities/generate_html_page.py detections/eval_detections.py detections/preprocessing.py detections/train_faster_rcnn.py detections/main_detr.py detections/generate_gt_detections.py utilities/navigator.py utilities/visualise_and_cache.py hicodet.py detections/visualise.py HICODetSubset HICODet main compute_map main Engine initialise sanity_check HICODetObject collate_fn main main create_aspect_ratio_groups _repeat_to_at_least DetectorEngine collate_fn main GroupedBatchSampler HICODetObject _quantize visualize name_parser visualise parse_commands list_node move zeros_like view squeeze tolist _idx append to_tensor cat BoxAssociation replace DetectionAPMeter eval unique float batched_nms join print associate argsort tqdm zeros nms_thresh compute_map max_human HICODet object_thresh human_thresh max_object detection_root join format _anno partition cache_dir reshape len tqdm data_root enumerate makedirs list build_model print Compose dict pretrained HICODet resume load_state_dict HICODetObject exists values Linear append batch_size DataLoader lr_drop BatchSampler seed StepLR set_device DistributedSampler get_rank SequentialSampler init_process_group engine initialise eval manual_seed update_state_key AdamW print Engine epochs show items list criterion model print zip squeeze draw_boxes initialise copy weight_dict item sum values pop sort tolist fasterrcnn_resnet_fpn load_state_dict ckpt_path append cuda exists ceil from_iterable repeat len list deepcopy sorted print _quantize format random_seed create_aspect_ratio_groups DetectorEngine RandomSampler GroupedBatchSampler HICODetObject num_epochs DatasetConcat nms_thresh image_idx open box_score_thresh nms show partition tolist from_numpy filename range format asarray replace join Draw print text HICODet rectangle data_root detection_root split split join isdigit print ls ceil range len show ellipse line Draw tolist rectangle zip pop int format print up visualise down len | # HICO-DET Utilities for the human-object interaction detection dataset [HICO-DET](http://www-personal.umich.edu/~ywchao/hico/) ## Supported Utilities - [x] __NEW!__ [Train and test DETR on HICO-DET](https://github.com/fredzzhang/hicodet/tree/main/detections#train-and-test-detr-on-hico-det) - [x] [A command-line style dataset navigator](https://github.com/fredzzhang/hicodet/tree/main/utilities#dataset-navigator) - [x] [Large-scale visualisation in web page](https://github.com/fredzzhang/hicodet/tree/main/utilities#generate-and-visaulise-box-pairs-in-large-scales) - [x] [Generate object detections with Faster R-CNN](https://github.com/fredzzhang/hicodet/tree/main/detections#generate-detections-using-faster-r-cnn) - [x] [Generate ground truth object detections](https://github.com/fredzzhang/hicodet/tree/main/detections#generate-ground-truth-detections) - [x] [Visualise detected objects](https://github.com/fredzzhang/hicodet/tree/main/detections#visualise-detections) - [x] [Evaluate object detections](https://github.com/fredzzhang/hicodet/tree/main/detections#evaluate-detections) | 2,128 |
freedombenLiu/FastPhotoStyle | ['image stylization'] | ['A Closed-form Solution to Photorealistic Image Stylization'] | demo.py process_stylization_folder.py process_stylization_ade20k_ssn.py photo_smooth.py converter.py models.py download_models.py photo_gif.py demo_with_ade20k_ssn.py smooth_filter.py photo_wct.py process_stylization.py photo_wct_loader weight_assign segment_this_img download_file_from_google_drive save_response_content get_confirm_token VGGDecoder VGGEncoder GIFSmoothing Propagator PhotoWCT ReMapping stylization memory_limit_image_resize Timer overlay stylization visualize_result SegReMapping smooth_local_affine smooth_filter Parameter items float list load load_state_dict padding_constant float transpose min astype float32 copy imgSize from_numpy shape dict unsqueeze round2nearest_multiple resize transform imread append get get_confirm_token save_response_content Session items list startswith height print thumbnail BICUBIC width Canny uint8 ones dilate range zeros astype unique load _best_local_affine_kernel bytes Module namedtuple Stream numpy Program _reconstruction_best_kernel encode _bilateral_smooth_kernel cuda compile get_function fromarray uint8 transpose convert ascontiguousarray shape resize smooth_local_affine array clip | [](https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/master/LICENSE.md)   ## FastPhotoStyle ### License Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). <img src="https://raw.githubusercontent.com/NVIDIA/FastPhotoStyle/master/teaser.png" width="800" title="Teaser results"> ### What's new | 2,129 |
freedombenLiu/ad_examples | ['active learning', 'anomaly detection'] | ['GLAD: GLocalized Anomaly Detection via Human-in-the-Loop Learning', 'Active Anomaly Detection via Ensembles: Insights, Algorithms, and Interpretability', 'Incorporating Feedback into Tree-based Anomaly Detection', 'Active Anomaly Detection via Ensembles'] | ad_examples/dnn/gan.py ad_examples/aad/plot_class_diversity.py ad_examples/timeseries/timeseries_customRNN.py ad_examples/timeseries/activity_word2vec.py ad_examples/ad/spectral_outlier.py ad_examples/aad/test_hyperplane_angles.py ad_examples/graph/gcn_test_support.py ad_examples/classifier/perceptron.py ad_examples/common/timeseries_datasets.py ad_examples/dnn/gan_test_support.py ad_examples/aad/data_stream.py ad_examples/glad/glad_support.py ad_examples/aad/forest_aad_support.py ad_examples/aad/preprocess_electricity_dataset.py ad_examples/common/nn_utils.py ad_examples/timeseries/timeseries_shingles.py ad_examples/loda/loda.py ad_examples/glad/glad_batch.py ad_examples/aad/anomaly_vs_classifier.py ad_examples/aad/simple_aad.py ad_examples/aad/demo_aad.py ad_examples/classifier/svm.py ad_examples/aad/test_concept_drift_classifier.py ad_examples/glad/plot_glad_results.py ad_examples/timeseries/timeseries_regression.py ad_examples/dnn/iso_gan_test_support.py setup.py ad_examples/ad/kde_outlier.py ad_examples/glad/glad_test_support.py datasets/KaggleFirstInternationalCompetitionOfTimeSeriesForecasting/sample_prediction.py ad_examples/aad/multiview_forest.py ad_examples/common/metrics.py ad_examples/common/expressions.py ad_examples/aad/plot_aad_results.py ad_examples/aad/loda_support.py ad_examples/aad/aad_base.py ad_examples/aad/forest_aad_detector.py ad_examples/timeseries/timeseries_arima.py ad_examples/common/test_sgd_optimization.py ad_examples/dnn/ad_autoencoder.py ad_examples/glad/glad_vs_aad.py ad_examples/aad/aad_globals.py ad_examples/aad/anomaly_dataset_support.py ad_examples/classifier/test_svm.py ad_examples/ad/pseudo_anom_outlier.py ad_examples/aad/aad_batch.py ad_examples/dnn/test_iso_gan.py ad_examples/aad/query_model_euclidean.py ad_examples/common/data_plotter.py ad_examples/ad/gmm_outlier.py ad_examples/timeseries/timeseries_rnn.py ad_examples/aad/query_model_other.py ad_examples/aad/random_split_trees.py ad_examples/aad/aad_test_support.py ad_examples/ad/outlier_effect.py ad_examples/common/expressions_tutorial.py ad_examples/aad/analyze_rules.py ad_examples/aad/aad_support.py ad_examples/timeseries/word2vec_custom.py ad_examples/aad/plot_anomalies_rectangle.py ad_examples/dnn/autoencoder.py ad_examples/aad/test_tree_properties.py ad_examples/common/gen_samples.py ad_examples/timeseries/activity_model.py ad_examples/aad/aad_ruleset_support.py ad_examples/aad/test_tree_detectors.py ad_examples/dnn/test_gan.py ad_examples/aad/classifier_trees.py ad_examples/aad/test_data_gen.py ad_examples/common/utils.py ad_examples/timeseries/simulate_timeseries.py ad_examples/aad/preprocess_weather_dataset.py ad_examples/common/sgd_optimization.py ad_examples/graph/simple_gcn.py ad_examples/aad/loda_aad.py ad_examples/aad/forest_description.py ad_examples/bayesian_ruleset/bayesian_ruleset.py ad_examples/aad/query_model.py ad_examples/loda/test_loda.py ad_examples/glad/afss.py ad_examples/aad/aad_loss.py ad_examples/aad/precomputed_aad.py ad_examples/percept/percept.py ad_examples/dnn/iso_gan.py ad_examples/dnn/dnn_classifier.py ad_examples/aad/test_concept_drift.py ad_examples/aad/test_hard_data.py ad_examples/timeseries/word2vec.py ad_examples/glad/test_glad.py 
ad_examples/ad/ad_outlier.py ad_examples/graph/test_gcn.py ad_examples/timeseries/casas.py ad_examples/ad/pca_reconstruct.py ad_examples/aad/aad_stream.py ad_examples/aad/test_rulesets.py package_files estimate_qtau Aad Budget MetricsStructure get_aad_metrics_structure get_budget_topK AadEventListener Ensemble aad_batch AadListenerForRules SampleData load_all_samples get_aad_command_args load_samples get_first_vals_not_marked get_aad_option_list AadOpts get_anomalies_at_top get_first_val_not_marked aad_loss_gradient_linear aad_loss_linear get_rulesets prepare_conjunctive_rulesets aad_stream prepare_stream_anomaly_detector prepare_aad_model StreamingAnomalyDetector train_aad_model load_aad_model load_aad_metrics save_aad_model summarize_aad_metrics get_linear_score_variance get_score_ranges save_aad_summary SequentialResults get_closest_indexes get_score_variances write_baseline_query_indexes write_sparsemat_to_file save_aad_metrics get_aad_model write_sequential_results_to_csv get_queried_indexes summarize_ensemble_num_seen check_random_vector_angle plot_score_contours plot_queries plot_dataset_2D plot_qval_hist plot_top_regions plot_selected_regions debug_qvals evaluate_forest_original plot_query_diversity plot_aad_2D plot_forest_contours_2D plot_tsne_queries plot_model_baseline_contours_2D plot_anomalous_2D test_ilp aad_unit_tests_battery swap_metadata load_summary plot_blank_image found_precomputed_summaries plot_rule_lengths aggregate_rules_data analyze_rules print_readable_rules summarize_values plot_scores write_all_summaries load_all_rule_data load_rules plot_num_rules analyze_rules_dataset accumulate_values string_agg_scores get_result_defs ResultDefs train_classifier plot_dataset train_anomaly_detector get_debug_args plot_regions get_auc plot_decision_tree_descriptions DecisionTreeAadWrapper RandomForestAadWrapper ClassifierForest IdServer StreamingSupport DataStream get_rearranging_indexes describe_instances detect_anomalies_and_describe get_debug_args RegionData AadForest is_forest_detector transform_features is_in_region BayesianRulesetsDescriber get_instances_for_description get_region_memberships CompactDescriber get_most_anomalous_subspace_indexes get_regions_for_description MinimumVolumeCoverDescriber get_compact_regions InstancesDescriber get_region_indexes_for_instances get_region_volumes AadLoda HistogramPDFs ModelManager get_avg_auc_for_samples get_hpdfs_for_samples PyDataModelManager get_avg_precs_for_samples CsvModelManager IForestMultiview IForestMultiviewTree process_results plot_results plot_diversity_all get_n_intermediate get_result_names get_x_tau plot_anomalies_ifor plot_anomalies_rect process_results get_num_discovered_classes_per_batch iter_by_window plot_results get_num_discovered_classes plot_class_discovery get_result_names test_precomputed_scores AadPrecomputed get_start_row_in_arff Query QueryTop QueryTopRandom QueryRandom QueryQuantile filter_by_euclidean_distance get_mean_euclidean_distance DistanceCache QueryTopDiverseByEuclideanDistance get_min_euclidean_distance QueryTopDiverseSubspace ArrTree RandomSplitTree HPDByInverseCDF rsforest_decision HSTree HSSplitter IForest hstree_decision rsforest_fit RSForest SplitContext Node HSTrees get_tree_partitions hstree_fit RSTree StackRecord RandomTreeBuilder SplitRecord RSForestSplitter RandomSplitForest SimpleActive test_kl_data_drift get_iforest_model test_kl_data_drift_classifier np_mat_to_str plot_dataset get_angles plot_angle_hist test_hyperplane_angles plot_rule_annotations test_aad_rules get_debug_args 
plot_selected_regions compute_n_found test_tree_detectors plot_labeled_value_hist plot_value_hist get_debug_args test_node_values make_ellipses g_MAD f_MAD plot_samples_pca transform_2D_data get_artificial_2D_data_uniform LabelDiffusion euclidean_dist get_confusion find_lt accumulate log_betabin sanity_check_bayesian_ruleset BayesianRuleset test_bayesian_ruleset Perceptron BinaryLinearSVMClassifier PairwiseLinearSVMClassifier Classifier MultiClassLinearSVMClassifier plot_rect_region plot_sidebar DataPlotter Predicate CmpGr ConjunctiveRule get_feature_meta_default Term convert_conjunctive_rule_to_feature_ranges evaluate_ruleset load_strings_from_file PredicateContext get_rule_satisfaction_matrix test_rule_apis CmpLr Var Atom convert_conjunctive_rules_to_feature_ranges check_if_at_least_one_rule_satisfied DType get_max_len_in_rules CmpGE CmpEq stack BinaryPredicate And convert_feature_ranges_to_rules traverse_predicate_conjunctions Expression Literal NumericContinuous UnaryPredicate conjunctive_predicate_to_list Factor CmpLE convert_conjunctive_rules_to_strings convert_strings_to_conjunctive_rules FeatureMetadata string_to_predicate save_strings_to_file Not evaluate_instances_for_predicate RuleParser Cmp Or dataframe_to_numpy load_data get_demo_samples plot_samples_and_lines get_sample_defs get_synthetic_samples normalize_and_center_by_feature_range load_donut_data interpolate_2D_line_by_slope_and_intercept generate_dependent_normal_samples read_anomaly_dataset load_face_data plot_sample get_hard_samples get_sphere_samples MVNParams interpolate_2D_line_by_point_and_vec AnomalyDataOpts fn_precision fn_auc dnn_layer AutoencoderAnomalyDetector leaky_relu dnn_construct PCA_TF MLPRegressor_TF get_train_batches Autoencoder DenseDNN sgdRMSPropNestorov avg_loss_check sgdAdam debug_log_sgd_losses sgd sgdMomentum get_sgd_batch get_num_batches sgdRMSProp generate_data g f get_loss_grad get_univariate_timeseries_data invert_difference_series_old invert_difference_series log_transform_series TsFileDef DiffScale prepare_tseries difference_series TSeries inverse_log_transform_series DTClassifier ecdf dataframe_to_matrix difftime InstanceList save Timer nrow constr_optim set_seed exception_to_string SKLClassifier RFClassifier rnorm rank rbind append normalize LogisticRegressionClassifier get_command_args read_resource_csv configure_logger append_instance_lists get_sample_feature_ranges rep SVMClassifier read_resource sample power get_option_list matrix dir_create load ncol cbind runif order matrix_rank get_random_item SetList quantile pnorm read_csv read_data_as_matrix autoencoder_ad autoencoder_visualize GanOpts Listener get_nn_layer set_random_seeds GAN get_gan_option_list get_cluster_labels fit_gmm GanListener read_dataset get_gan_sample_defs plot_log_likelihood test_samples plot_sample_hist plot_1D_gan_samples plot_2D_gan_samples get_normal_samples GanOpts Listener get_nn_layer set_random_seeds GAN get_train_batches get_gan_option_list get_cluster_labels fit_gmm GanListener read_dataset get_gan_sample_defs plot_log_likelihood test_samples plot_sample_hist plot_1D_gan_samples plot_2D_gan_samples get_normal_samples test_gan test_ano_gan get_gan_layer_defs get_iso_model test_ano_gan test_gan get_gan_layer_defs suppression_layer GladOpts partition_instances get_unlabeled_batches AFSS get_glad_option_list get_afss_batches construct_network get_glad_command_args afss_active_learn_ensemble glad_active_learn set_random_seeds set_results_dir get_afss_model GLADRelevanceDescriber SequentialResults AnomalyEnsemble 
to_2D_matrix AnomalyEnsembleLoda GLADEnsembleLimeExplainer test_get_afss_batches plot_glad_relevance_regions plot_dataset get_grid plot_weighted_scores prepare_loda_model_with_w plot_afss_scores prepare_loda_ensemble plot_scores test_loda get_top_ranked_instances plot_ensemble_scores populate_aad_opts aad_active_learn aad_active_learn_ensemble AadWithExisingEnsemble get_precomputed_aad_args process_glad_results get_glad_result_defs get_results get_glad_result_names test_glad get_synth_graph_adjacency read_graph_dataset plot_nodes test_create_gcn_default read_dataset test_tensorflow_array_differentiation plot_model_diagnostics test_edge_sample read_datasets_for_illustration plot_labels_with_modified_node plot_edges find_insts gradients_to_arrow_texts read_face_dataset_with_labels test_neighbor_gradients nodes_to_arrow_texts test_marked_nodes get_target_and_attack_nodes sample_indexes plot_graph test_robust_training_helper plot_arrow_texts read_synth_graph_dataset_with_labels SimpleGCNAttack get_nn_layer set_random_seeds euclidean_dist AdversarialUpdater sign SimpleGCN get_gcn_option_list GraphAdjacency GcnOpts EnsembleGCN NoopSampleUpdater SampleUpdater create_gcn_default test_gcn build_proj_hist get_all_hist_pdfs get_neg_ll_all_hist loda get_bin_for_equal_hist LodaModel pdf_hist get_zero_var_features LodaResult Loda ProjectionVectorsHistograms get_original_proj histogram_r_mod histogram_r pdf_hist_equal_bins HistogramR get_random_proj get_best_proj get_neg_ll get_param_sig Oracle plot_learning plot_original_feature_tsne ActivityRNN Casas write_sensor_data_as_document maybe_download_casas SimTs MA1 generate_synthetic_activity_data Sinusoidal read_activity_data write_to_file rolling_forecast_ARIMA time_lag_diff plot_lag_difference forecast_and_report_anomalies fit_ARIMA TsRNNCustom find_anomalies_with_regression TsRNN read_ts find_anomalies_with_shingles Word2vec CustomWord2vec join extend isfile append listdir int topK min maxbudget tau round budget topK min dot get_random_weights get_budget_topK quantile append zeros float max range MetricsStructure append zeros precision_k range len get_score read_data_as_matrix getLogger cumsum all_regions SequentialResults set_multi_run_options debug_qvals decision_function write_sequential_results_to_csv Timer get_queried_indexes summarize_ensemble_num_seen aad_unit_tests_battery str list aad_learn_ensemble_weights_with_budget len write_baseline_query_indexes randseed evaluate_forest_original transform_to_ensemble_features savetxt rbind output_all_data append sum budget fid get_aad_command_args RandomState message debug reruns get_runidxs init_weights get_alad_metrics_name_prefix datafile resultsdir get_aad_model zeros Ensemble join T get_uniform_weights str_opts is_forest_detector detector_type linesep argsort streaming AadOpts AadListenerForRules plot_tsne_queries configure_logger fit add_argument ArgumentParser parse_args argv get_aad_option_list nan range len append range len order zeros sum range len ndarray read_csv array append join load_samples dot max range len max list ncol dot rep append zeros sum array range len set_confusion_matrix convert_strings_to_conjunctive_rules where_satisfied convert_feature_ranges_to_rules sum enumerate BayesianRulesetsDescriber get_top_regions CompactDescriber prepare_conjunctive_rulesets describe get_feature_meta_default convert_conjunctive_rules_to_strings array RandomState fid runidx reruns randseed init_weights get_aad_model fit load_aad_model str list debug len is_forest_detector detector_type all_regions w 
modelfile train_aad_model stream_window update_model_from_buffer pretrain allow_stream_update update_weights_with_no_feedback get_sample_feature_ranges read_next_from_stream prepare_aad_model get_next_from_stream StreamingAnomalyDetector max_buffer init_query_state x move_buffer_to_unlabeled getLogger cumsum update_weights_with_no_feedback SequentialResults set_multi_run_options write_sequential_results_to_csv Timer max_buffer run_feedback seed move_buffer_to_unlabeled ones len randseed rbind append fid get_aad_command_args message debug allow_stream_update DataStream get_runidxs resultsdir matrix zeros IdServer max_windows update_model_from_buffer str_opts prepare_stream_anomaly_detector AadOpts get_next_from_stream configure_logger array read_data_as_matrix AadForest is_forest_detector detector_type AadLoda AadPrecomputed metrics cumsum labels rbind zeros range queried len save list min dot quantile append max range var str todense list debug dot shape sum array str T list arange debug add zeros matrix sum get_linear_score_variance message debug min get_closest_indexes SetList set Timer zeros array range enumerate len cumsum join debug get_alad_metrics_name_prefix savetxt resultsdir zeros enumerate stream_window join num_seen_baseline copy get_alad_metrics_name_prefix aucs stream_window_baseline savetxt true_queried_indexes resultsdir num_not_seen true_queried_indexes_baseline num_seen cumsum zeros queried len join ndarray isinstance write linesep close savetxt range flush open dump close open load close open save print get_metrics_path load get_next_plot close plot_points DataPlotter filedir join asarray plot_queries debug len resultsdir dataset read_csv queried plot_top_regions plot_query_diversity linspace vstack Timer plot_forest_contours_2D transform_to_ensemble_features evaluate_forest_original savetxt plot_aad_2D meshgrid budget d message debug write_sparsemat_to_file plot_anomalous_2D plot_dataset_2D join T print is_forest_detector detector_type plot_model_baseline_contours_2D len str list arccos debug pi dot get_random_weights zeros range len arange close bar histogram append get_next_plot DataPlotter estimate_qtau get_uniform_weights arccos topK plot_qval_hist debug min runidx pi dot get_budget_topK quantile float max range get_score Timer transform_to_ensemble_features shape contourf get_next_plot queried plot_points message debug close start matrix enumerate plot_sidebar reshape argsort array DataPlotter get_score plot_points plot_sidebar message debug reshape close w transform_to_ensemble_features start shape argsort contourf linspace Timer meshgrid get_next_plot DataPlotter get_score T d cumsum reshape ones argsort dot decision_function len get_score plot_points plot_sidebar ones cumsum print reshape matrix close dot argsort transform_to_ensemble_features shape contourf get_next_plot get_num_members DataPlotter plot_points plot_sidebar message reshape debug close argsort start decision_function shape contourf Timer get_next_plot matrix DataPlotter plot_points message debug is_forest_detector detector_type close regions_in_forest start plot_rect_region region Timer get_next_plot DataPlotter enumerate plot_points close plot_rect_region title region get_next_plot DataPlotter n_estimators debug is_forest_detector detector_type plot_selected_regions argsort start Timer get_instances_for_description message n_estimators debug is_forest_detector detector_type plot_selected_regions get_sample_feature_ranges start get_regions_for_description get_compact_regions Timer 
get_region_volumes get_score describe_n_top describe_volume_p get_regions_for_description Timer filter_by_euclidean_distance QueryTopDiverseSubspace plot_selected_regions get_region_volumes get_region_memberships message n_estimators debug filter_by_diversity w get_sample_feature_ranges start is_forest_detector detector_type argsort get_compact_regions zeros len T print matrix sum ilp join load_strings_from_file budget debug convert_strings_to_conjunctive_rules evaluate_ruleset dict mean rule_output_interval isfile append range len append items list sorted ndarray reshape vstack keys str debug extend dict summarize_values accumulate_values append sorted keys debug aggregate_rules_data get_alad_metrics_name_prefix set_multi_run_options get_runidxs load_rules append join savetxt resultsdir join debug asmatrix dict resultsdir read_csv join debug resultsdir plot xlabel ylabel ylim title legend append get_next_plot plot xlabel ylabel ylim title legend append get_next_plot max plot xlabel ylabel title legend append get_next_plot text xlim ylim get_next_plot xticks yticks get swap_metadata join get_feature_meta_default debug dataset load_all_rule_data read_data_as_matrix get swap_metadata load_summary plot2D plot_blank_image get_feature_meta_default debug plot_rule_lengths plot_scores write_all_summaries plot_num_rules dataset load_all_rule_data read_data_as_matrix replace print close print_readable_rules analyze_rules_dataset datafile resultsdir dataset DataPlotter budget get_score fn_auc cbind plot_rect_region region get_score plot_dataset linspace randseed transform_to_ensemble_features shape contourf update_weights meshgrid get_next_plot range plot_points RandomState debug describe_instances close copy w init_weights get_aad_model n_pretrain plot_sidebar reshape fit argsort get_auc array DataPlotter len plot_points DataPlotter plot_sidebar RandomForestAadWrapper plot_dataset reshape debug describe_instances close argsort shape clf contourf linspace meshgrid get_next_plot predict_prob_for_class fit DecisionTreeAadWrapper debug describe_instances plot_dataset debug close scatter plot_regions legend get_next_plot DataPlotter arange bayesian_rules get_region_memberships join CompactDescriber describe debug MinimumVolumeCoverDescriber append zeros sum BayesianRulesetsDescriber get_score cumsum get_initial_query_state update_query_state qtype str list ones randseed transform_to_ensemble_features order_by_score update_weights append RandomState debug describe_instances w init_weights get_aad_model print extend argsort array get_next_query fit range len zeros range enumerate is_in_region multiply d w multiply array get_region_ids len region zeros prod range enumerate len array queried update list set argsort array enumerate get_region_ids len list reshape len vstack region append zeros array enumerate is_in_region get_region_memberships reshape debug array matrix ilp HistogramPDFs get_all_hist_pdfs fmat append range len fn_auc dot append lbls empty range len fn_precision lbls dot zeros empty range len astype len minimum arange errorbar plot xlabel debug close ylabel ylim get_n_intermediate legend get_next_plot xlim max enumerate DataPlotter len max list num_anoms debug dataset plot_results dir_create get_results append get_result_defs get_result_names enumerate len arange plot xlabel len close ylabel ylim legend get_next_plot xlim enumerate DataPlotter dir_create dot argsort interpolate_2D_line_by_slope_and_intercept pi axhline get_sphere_samples get_x_tau xticks yticks set_aspect list ones axvline 
shape ylim scatter legend append get_next_plot normalize plot close interpolate_2D_line_by_point_and_vec xlim enumerate join dot array DataPlotter len interpolate_2D_line_by_slope_and_intercept vstack get_x_tau xticks yticks set_aspect list ones shape uniform ylim scatter legend append normalize get_next_plot mu mcorr plot close MVNParams interpolate_2D_line_by_point_and_vec xlim enumerate join dvar arctan reshape generate_dependent_normal_samples dot zeros array DataPlotter arange plot xlabel debug close ylabel ylim legend get_next_plot xlim enumerate DataPlotter len add set zeros range enumerate len len list set vstack append iter_by_window array range len get_original_labels arange cumsum axhline str std get_window_indexes ylabel ylim legend get_next_plot plot close mean sqrt plot_class_discovery xlim xlabel get_num_discovered_classes_per_batch get_queried DataPlotter get_score read_data_as_matrix getLogger cumsum str aad_learn_ensemble_weights_with_budget list len AadPrecomputed randseed transform_to_ensemble_features queried RandomState get_aad_command_args debug init_weights datafile resultsdir Ensemble get_uniform_weights str_opts AadOpts configure_logger fit add_dist get_dist sum len get_dist add_dist min sum Inf list extend delete get_mean_euclidean_distance argsort DistanceCache append zeros array enumerate get_min_euclidean_distance len ones int sum var ecdf x_cdf check_random_state arange min shuffle HSTree fit decision_function Timer RSTree check_random_state arange min shuffle fit decision_function Timer init_weights AadForest fit getLogger dataset max get_trees_to_replace seed list ones read_anomaly_dataset ylabel ylim update_model_from_stream_buffer scatter legend append get_next_plot range get_command_args get_iforest_model n_estimators plot debug close DataStream read_next_from_stream xlim IdServer int add_samples xlabel text get_node_sample_distributions get_KL_divergence_distribution configure_logger DataPlotter len RandomForestAadWrapper getLogger dataset RF get_trees_to_replace read_anomaly_dataset get_command_args n_estimators debug DataStream w read_next_from_stream IdServer int fit get_node_sample_distributions get_KL_divergence_distribution configure_logger len ylim unique xlim range arccos pi dot zeros range str list arange xlabel debug rc ylabel bar histogram get_next_plot get_score read_data_as_matrix all_regions pi seed str list len randseed transform_to_ensemble_features append RandomState get_aad_command_args arccos debug close w get_runidxs init_weights datafile resultsdir get_aad_model init power plot_angle_hist dir_create get_uniform_weights get_angles str_opts is_forest_detector detector_type argsort dot streaming get_auc AadOpts Perceptron configure_logger array DataPlotter fit xlabel ylabel scatter legend join text xlim ylim get_next_plot xticks yticks describe get_feature_meta_default evaluate_ruleset plot_rule_annotations dataset load_strings_from_file str get_rulesets read_anomaly_dataset plot_selected_regions names convert_conjunctive_rules_to_feature_ranges get BayesianRulesetsDescriber asarray CompactDescriber prepare_conjunctive_rulesets debug close detect_anomalies_and_describe resultsdir join print convert_conjunctive_rules_to_strings save_strings_to_file array DataPlotter argsort cumsum len get_score dataset str list transpose read_anomaly_dataset randseed transform_to_ensemble_features append RandomState debug hstack copy init_weights get_aad_model init enumerate get_uniform_weights fn_auc str_opts AadOpts zeros fit min bar histogram 
get_next_plot max str list arange debug bar histogram get_next_plot get_score read_data_as_matrix all_regions plot_labeled_value_hist seed plot_value_hist len randseed transform_to_ensemble_features d RandomState get_aad_command_args debug close init_weights datafile resultsdir get_aad_model init str_opts is_forest_detector detector_type streaming AadOpts configure_logger DataPlotter fit norm arctan2 Ellipse set_clip_box n_components pi add_artist eigh sqrt eye range diag bbox mean abs dot dot multiply transpose mean plot_points xlabel close ylabel get_next_plot DataPlotter hstack hstack sqrt sum sum array len isinstance lgamma print zip append next func iter bisect_left str precompute greedy_init screen_rules set_parameters debug compute_prob shape propose BayesianRuleset bayesian_pattern_based join asarray get_feature_meta_default print convert_strings_to_conjunctive_rules read_anomaly_dataset predicted_rules BayesianRuleset fit min Rectangle add_patch plot cbind ones add_patch Rectangle len compile parse compile isinstance append p2 p1 PredicateContext traverse_predicate_conjunctions items list join parse isfinite append val featuredefs predicates varindex dict is_continuous enumerate append parse zeros where_satisfied enumerate zeros where_satisfied precision_recall_fscore_support check_if_at_least_one_rule_satisfied Factor FeatureMetadata unique append range evaluate str asarray parse get_feature_meta_default print set_confusion_matrix where_satisfied read_anomaly_dataset evaluate_instances_for_predicate RuleParser compile StringIO read_csv zeros range isinstance mvn normal rvs fill_diagonal reshape diag copy dot sqrt range ones hstack uniform vstack append zeros list mcorr dvar generate_dependent_normal_samples extend rbind get_sample_defs zeros range mu len list get_synthetic_samples max read_resource_csv debug min shape zeros array AnomalyDataOpts read_resource_csv dataframe_to_matrix read_data_as_matrix mean min max max read_resource_csv debug min shape array load_face_data get_synthetic_samples load_donut_data plot_points plot xlabel close ylabel ylim scatter legend get_next_plot xlim DataPlotter enumerate plot_points xlabel close ylabel get_next_plot DataPlotter nrow float sum range minimum list cumsum maximum range extend rank nrow float sum max zeros len list min arange shuffle range min arange mean debug Timer inf arange debug f grad shuffle copy debug_log_sgd_losses dot mean isnan copyto get_sgd_batch get_num_batches zeros range arange Timer copyto multiply f get_num_batches range inf debug grad shuffle copy mean sqrt get_sgd_batch debug_log_sgd_losses isnan dot zeros len Timer inf arange debug f grad shuffle copy debug_log_sgd_losses mean isnan dot copyto get_sgd_batch get_num_batches zeros range len arange Timer copyto multiply f get_num_batches range inf debug grad shuffle copy mean sqrt get_sgd_batch debug_log_sgd_losses isnan dot zeros len arange Timer copyto multiply f get_num_batches range inf debug grad shuffle copy mean sqrt get_sgd_batch debug_log_sgd_losses isnan dot zeros len normal sort dot uniform zeros range mean dot mean multiply transpose path read_resource_csv log exp shape range zeros shape range zeros shape range zeros add_argument ArgumentParser parse_args get_option_list argv seed shape float int isinstance list Ranking ranks argsort FRACTIONAL COMPETITION empty array shuffle min max extend isinstance csr_matrix isinstance timedelta dot sqrt randint x_transformed y ids vstack rbind append get_data print BytesIO read_csv read_resource zeros array range 
startcol datafile read_csv labelindex dump open open makedirs basicConfig getLogger minimize debug nrow array range plot_points PCA_TF DiffScale close Autoencoder title transform get_next_plot dataset fit_transform DataPlotter fit AutoencoderAnomalyDetector debug fn_auc hstack decision_function max fit seed set_random_seed infty bic append GaussianMixture range fit debug n_components predict fit_gmm add_argument ArgumentParser list mcorr dvar get_gan_sample_defs reshape generate_dependent_normal_samples append zeros mu enumerate load_donut_data ones read_anomaly_dataset load_face_data append zeros dataset get_normal_samples arange plot xlabel ylabel scatter histogram unique legend enumerate ones close plot_sample_hist get_next_plot zeros DataPlotter plot_points make_ellipses close ylim get_next_plot xlim DataPlotter errorbar plot close get_next_plot DataPlotter str read_dataset debug close shape plot_sample_hist get_cluster_labels plot_2D_gan_samples get_next_plot zeros DataPlotter min max str list get_anomaly_score append get_next_plot range plot_points plot debug subtract close enumerate join text min argsort zeros array n_ano_gan_test DataPlotter len ano_gan read_dataset get_gan_layer_defs conditional plot_2D_gan_samples str list get_gen_output_samples len get_gen_input_samples sum range plot plot_log_likelihood debug GAN init_session unique plot_1D_gan_samples join test_ano_gan save_session eye get_cluster_labels lls fit reshape randseed decision_function IsolationForest fit get_iso_model Timer shape message iso_gan list ones min arange shuffle range arange ones debug min shuffle argsort repeat vstack range len add_argument ArgumentParser parse_args get_glad_option_list argv cumsum init_network SequentialResults Timer str list close_session ones get_weighted_scores update_afss range budget plot_glad_relevance_regions message get_afss_model debug explain plot_weighted_scores plot_afss_scores get_scores extend argsort get_first_vals_not_marked array GLADEnsembleLimeExplainer SequentialResults Timer dataset set_random_seeds read_anomaly_dataset randseed afss_active_learn_ensemble results_dir append range message set_results_dir debug reruns prepare_loda_ensemble merge dir_create m write_to_csv str max_afss_epochs debug afss_l2_lambda extend AFSS append afss_nodes max len list interpolate_2D_line_by_point_and_vec plot array keys range meshgrid linspace reshape get_grid colorbar shape decision_function set_ylabel contourf fn_score_transform plot_dataset debug close plot_scores get_scores DataPlotter get_next_plot get_ensemble_type arange plot_dataset reshape get_grid close colorbar shape decision_function set_ylabel contourf get_scores range DataPlotter get_next_plot str list plot_dataset debug get_grid reshape close colorbar get_weighted_scores argsort shape set_ylabel contourf get_scores DataPlotter get_next_plot describe plot_dataset GLADRelevanceDescriber close plot_rect_region title get_member_relevance_scores_ranks get_projections get_next_plot DataPlotter enumerate build_proj_hist LodaModel arange LodaResult Loda ProjectionVectorsHistograms get_neg_ll_all_hist str message debug transpose prepare_loda_model_with_w dot sqrt shape Loda Timer sum diag fit argsort decision_function str list partition_instances debug read_anomaly_dataset prepare_loda_ensemble get_top_ranked_instances get_afss_batches get_scores dataset set_results_dir plot read_anomaly_dataset plot_ensemble_scores prepare_loda_ensemble results_dir get_top_ranked_instances dataset dir_create reruns afss_tau budget get_score 
cumsum get_initial_query_state SequentialResults update_query_state qtype str list ones transform_to_ensemble_features order_by_score update_weights append debug w init_weights AadWithExisingEnsemble extend argsort array get_next_query set_random_seeds message debug SequentialResults randseed aad_active_learn_ensemble Timer m ensemble_type write_to_csv enumerate merge mean subtract get_per_run_results std get_glad_result_defs list num_anoms get_glad_result_names debug plot_results set dir_create get_results append dataset max enumerate values len init_network get_top_ranked_instances dataset str set_random_seeds close_session read_anomaly_dataset get_weighted_scores randseed shape results_dir plot_ensemble_scores update_afss range set_results_dir plot get_afss_model debug plot_weighted_scores plot_afss_scores prepare_loda_ensemble get_scores dir_create log_probability_ranges get_demo_samples vstack all where array array shuffle int check_random_state arange check_random_state read_face_dataset_with_labels debug where sample_indexes read_synth_graph_dataset_with_labels len get_synth_graph_adjacency GraphAdjacency read_dataset build_adjacency read_graph_dataset dataset append append get text arrow plot_points plot_arrow_texts enumerate range plot get_next_plot title plot_nodes plot_edges append debug find_insts str Variable debug concat matmul placeholder zeros get_target_and_attack_nodes debug close read_datasets_for_illustration plot_graph unique get_opts_name_prefix dataset DataPlotter len get_target_and_attack_nodes sample_edges debug close read_datasets_for_illustration plot_graph GraphAdjacency unique n_neighbors get_opts_name_prefix dataset range DataPlotter len fit_x fit_A copy plot_graph array gradients_to_arrow_texts DataPlotter plot_labels_with_modified_node fit_y fit_x close extend fit_A nodes_to_arrow_texts plot_graph modify_gcn_and_predict gcn array predict create_gcn_default debug get_f1_score read_datasets_for_illustration max fit plot_model_diagnostics Timer get_opts_name_prefix suggest_nodes max read_datasets_for_illustration SimpleGCNAttack get_f1_score sample_neighbors append message debug GraphAdjacency n_vulnerable enumerate fit fit_A dict get_top_uncertain_nodes create_gcn_default str create_gcn_default debug len get_f1_score AdversarialUpdater read_datasets_for_illustration restore_values select_and_update_nodes max fit n_sub_layers n_layers SimpleGCN append EnsembleGCN range ensemble add_argument ArgumentParser SimpleGCNAttack create_gcn_default get_target_and_attack_nodes close_session print debug fit get_f1_score suggest_nodes get_opts_name_prefix shape find_minimum_modification plot_model_diagnostics append zeros dataset max read_datasets_for_illustration argmax int arange debug HistogramR isfinite range histogram zeros sum log len int list message debug HistogramR isfinite range histogram Timer append zeros sum array log len trunc array density len density breaks zeros max range get_bin_for_equal_hist len int arange randn min extend delete sqrt floor sample zeros sum range len ncol dot append range histogram_r_mod dot zeros pdf_hist_equal_bins pdf_hist dot zeros range len get_all_hist_pdfs vfunc vectorize log ncol get_random_proj ones mean append nrow abs get_neg_ll zeros Inf append zeros ncol range Timer ncol hists message arange debug w get_neg_ll_all_hist get_best_proj nrow get_original_proj max append ncol range interpolate_2D_line_by_slope_and_intercept get_x_tau xticks yticks set_aspect str list title ylim scatter tau legend fixed_tau append get_next_plot 
plot debug w tau_relative interpolate_2D_line_by_point_and_vec xlim enumerate text dot array get_batches plot_points TSNE reshape debug algo close get_next_plot fit_transform DataPlotter join urlretrieve exists stat insert transpose hstack savetxt len rvs arange get_samples write_to_file argmax str list axvline title append get_next_plot range plot debug set_xlim close multinomial zeros array DataPlotter len asarray read_csv zeros len range shape to_datetime arange range len get_univariate_timeseries_data arange plot close time_lag_diff title get_next_plot dataset array DataPlotter len arange str list axvline log_transform time_lag_diff title fit_ARIMA get_next_plot pacf get_univariate_timeseries_data log_transform_series plot debug close tight_layout ARIMA_order int print rolling_forecast_ARIMA acf array DataPlotter len arange inverse_transform abs get_batches str list DiffScale len axvline SVR prepare_tseries append get_next_plot MLPRegressor_SK fit_transform range plot debug hstack set_xlim close scale int RandomForestRegressor invert_difference_series reshape MLPRegressor_TF difference_series DataPlotter fit arange TSeries max get_shingles str OneClassSVM samples get_next_plot AutoencoderAnomalyDetector log_transform_series plot LocalOutlierFactor debug set_xlim close get_sample_feature_ranges IsolationForest reshape difference_series DataPlotter fit list get_univariate_timeseries_data read_resource_csv asarray print array append keys | Python libraries required: -------------------------- numpy (1.14.2) scipy (1.0.0) scikit-learn (0.19.1) cvxopt (1.1.9) pandas (0.22.0) ranking (0.3.1) statsmodels (0.9.0) matplotlib (2.1.0) | 2,130 |
freelunchtheorem/Conditional_Density_Estimation | ['density estimation'] | ['Noise Regularization for Conditional Density Estimation', 'Conditional Density Estimation with Neural Networks: Best Practices and Benchmarks'] | cde/evaluation/eurostoxx_eval/load_dataset.py cde/density_estimator/NKDE.py cde/evaluation/simulation_eval/plotting/question1_v1_plots.py cde/evaluation/eurostoxx_eval/noise_reg_plots.py cde/density_simulation/__init__.py cde/evaluation/simulation_eval/base_experiment.py cde/utils/integration.py cde/utils/misc.py cde/evaluation/simulation_eval/question2_entropy_reg.py cde/utils/tf_utils/network.py cde/density_simulation/LinearGaussian.py cde/evaluation/simulation_eval/question6_noise_schedules.py cde/utils/serializable.py cde/utils/tf_utils/parameterized.py demo.py cde/utils/tf_utils/adamW.py cde/model_fitting/sim_eval.py setup.py cde/density_estimator/BaseNNMixtureEstimator.py config.py tests/unittests_evaluations.py cde/density_simulation/EconDensity.py cde/density_estimator/CKDE.py cde/utils/tf_utils/layers_powered.py cde/utils/distribution.py cde/evaluation/empirical_eval/experiment_util.py cde/evaluation/simulation_eval/question5_regularisation_KMN.py cde/evaluation/empirical_eval/benchmark_empirical_kde.py cde/density_simulation/toy_densities.py cde/evaluation/simulation_eval/question5_regularisation_NF.py cde/__init__.py cde/evaluation/eurostoxx_eval/empirical_benchmark.py cde/BaseConditionalDensity.py cde/evaluation/simulation_eval/question4_benchmark_student5dim.py tests/unittests_utils.py tests/unittests_estimators.py cde/evaluation/simulation_eval/plotting/question6_plots.py cde/evaluation/simulation_eval/question4_benchmark_econ_density.py cde/utils/tf_utils/tensor_utils.py cde/evaluation/eurostoxx_eval/moments_time_series.py docs/source/conf.py cde/evaluation/simulation_eval/hyperparam_sweep_nonparametrics.py cde/density_estimator/normalizing_flows/BaseNormalizingFlow.py cde/density_estimator/normalizing_flows/__init__.py tests/unittests_io.py cde/evaluation/simulation_eval/plotting/question7_plots.py cde/density_estimator/NF.py cde/utils/optimizers.py cde/density_estimator/normalizing_flows/RadialFlow.py cde/model_fitting/ConfigRunnerLogProb.py cde/evaluation/simulation_eval/question1_noise_reg_xy.py cde/density_estimator/normalizing_flows/PlanarFlow.py cde/evaluation/simulation_eval/plotting/question5_plots.py cde/evaluation/simulation_eval/question4_benchmark_skew.py cde/density_simulation/LinearStudentT.py cde/density_simulation/GMM.py cde/evaluation/simulation_eval/plotting/question1_plots.py cde/model_fitting/plotting.py cde/utils/async_executor.py cde/evaluation/simulation_eval/question3_NNvsCKDE_Econ_GMM.py cde/evaluation/simulation_eval/question7_regularization_logprobs.py cde/model_fitting/divergences.py cde/evaluation/simulation_eval/plotting/question8_plots.py cde/evaluation/simulation_eval/hyperparam_sweep.py cde/density_estimator/MDN.py cde/model_fitting/GoodnessOfFitSingleResult.py cde/evaluation/simulation_eval/question4_benchmark_student10dim.py cde/evaluation/simulation_eval/question5_regularisation_MDN.py cde/model_fitting/GoodnessOfFitResults.py tests/unittests_simulations.py cde/density_simulation/ArmaJump.py cde/density_estimator/LSCDE.py tests/unittests_configrunner.py cde/utils/tf_utils/layers.py cde/evaluation/simulation_eval/plotting/question2_plots.py cde/evaluation/eurostoxx_eval/fit_density.py cde/evaluation/eurostoxx_eval/feature_selection.py cde/model_fitting/ConfigRunner.py 
cde/evaluation/empirical_eval/regularization_empirical.py cde/utils/tf_utils/map_inference.py cde/density_simulation/JumpDiffusionModel.py cde/model_fitting/GoodnessOfFit.py cde/evaluation/empirical_eval/benchmark_empirical.py cde/evaluation/simulation_eval/question8_benchmark.py cde/density_estimator/normalizing_flows/IdentityFlow.py cde/density_simulation/BaseConditionalDensitySimulation.py cde/density_estimator/__init__.py cde/evaluation/simulation_eval/question4_benchmark_arma_jump.py cde/evaluation/simulation_eval/plotting/hyperparam_sweep_plots.py cde/density_simulation/SkewNormal.py cde/density_estimator/BaseNNEstimator.py cde/utils/center_point_select.py cde/model_fitting/GoodnessOfFitLogProb.py tests/dummies.py cde/evaluation/simulation_eval/plotting/question4_plots.py cde/evaluation/empirical_eval/datasets.py cde/evaluation/eurostoxx_eval/underest_of_variance.py cde/density_estimator/KMN.py cde/density_estimator/BaseDensityEstimator.py cde/utils/io.py tests/unittests_normalizing_flows.py cde/density_estimator/normalizing_flows/AffineFlow.py tests/__init__.py cde/evaluation/simulation_eval/plotting/question3_plots.py cde/evaluation/simulation_eval/question3_NNvsCKDE_Arma_Skew.py ConditionalDensity BaseDensityEstimator BaseNNEstimator BaseNNMixtureEstimator ConditionalKernelDensityEstimation KernelMixtureNetwork LSConditionalDensityEstimation MixtureDensityNetwork NormalizingFlowEstimator NeighborKernelDensityEstimation AffineFlow BaseNormalizingFlow IdentityFlow InvertedPlanarFlow InvertedRadialFlow ArmaJump BaseConditionalDensitySimulation EconDensity GaussianMixture JumpDiffusionModel LinearGaussian LinearStudentT _sigmoid SkewNormal sigmoid build_toy_dataset build_toy_dataset2 Rule_of_thumb Polynomial_Rate experiment experiment NCYTaxiDropoffPredict _UCI Yacht UCI_Dataset Energy _convert_to_day_minute BostonHousing _process_time EuroStoxx50 WineRed WineWhite Protein Power Conrete Dataset _evaluate_params _initialize_model_cv_ml _initialize_model_cv run_benchmark_train_test_fit_cv run_benchmark_train_test_fit_cv_ml Rule_of_thumb Polynomial_Rate experiment initialize_models run_benchmark_train_test_cv_ml train_valid_split run_benchmark_train_test empirical_evaluation empirical_benchmark run_benchmark_train_test_fit_by_cv cv_param_search main pca_comp main pca_var_explained _make_realized_vol_df _make_risk_free_df target_feature_split _make_fama_french_mom_df _make_return_df _compute_frama_french_factor_risk _make_riskneutral_df _make_exp_tail_variation_df make_overall_eurostoxx_df _make_fama_french_df _make_variance_risk_premium_df main estimate_cov launch_logprob_experiment launch_experiment question1 question1 question1 question2 question3 question3 question4 question4 question4 question4 question4 question5 question5 question5 question6 question7 question8 _resize_plots _hash_task_dict ConfigRunner load_dumped_estimators make_hash_sha256 load_dumped_estimator _create_configurations _make_hashable _add_seeds_to_sim_params ConfigRunnerLogProb divergence_measures_pdf kl_divergence_pdf hellinger_distance_pdf js_divergence_pdf _divergence_mc _make_2d GoodnessOfFit sample_x_cond GoodnessOfFitLogProb GoodnessOfFitResults GoodnessOfFitSingleResult comparison_plot2d_sim_est fit_and_plot_estimated_vs_original_2D plot_dumped_model get_density_plots eval_econ_data generate_report main eval1 plot_fitted_distribution execute_batch_async_pdf LoopExecutor _split_into_batches _start_process _dummy_fun AsyncExecutor sample_center_points _standard_student_t_pdf batched_univ_t_pdf 
multivariate_t_rvs multidim_t_rvs multidim_t_pdf batched_univ_t_cdf batched_univ_t_rvs NoStdStreams mc_integration_student_t numeric_integation dump_as_pickle store_dataframe store_objects get_full_path store_csv append_result_to_csv load_time_series_csv _project_to_pos_semi_def project_to_pos_semi_def is_pos_def take take_of_type norm_along_axis_1 AdamOptimizer find_root_by_bounding find_root_newton_method Serializable DecoupledWeightDecayExtension AdamWOptimizer extend_with_decoupled_weight_decay MomentumWOptimizer MergeLayer LSTMLayer DropoutLayer Pool2DLayer as_tuple GRUStepLayer DenseLayer create_param ElemwiseSumLayer GaussianNoiseLayer ReshapeLayer conv_output_length VariableLayer Conv2DLayer NormalizationLayer pool_output_length SliceLayer SpatialExpectedSoftmaxLayer XavierUniformInitializer apply_ln NonlinearityLayer OpLayer OrthogonalInitializer GRULayer TfBasicLSTMLayer batch_norm BatchNormLayer py_ortho_init DimshuffleLayer unique InputLayer spatial_expected_softmax PseudoLSTMLayer FlattenLayer TfGRULayer get_output G ParamLayer ConcatLayer get_all_params LSTMStepLayer BaseConvLayer HeUniformInitializer get_all_layers Layer LayersPowered MAP_inference GRUNetwork LSTMNetwork MLP suppress_params_loading Parameterized JointParameterized unflatten_tensors flatten_tensors SimulationDummy GaussianDummy SkewNormalDummy configrunner TestConditionalDensityEstimators_fit_by_crossval TestRegularization TestConditionalDensityEstimators_2d_gaussian TestLogProbability TestSerializationDensityEstimators TestDivergenceMeasures TestRiskMeasures _kl_gaussians _hellinger_gaussians TestIO TestFitByCrossval Test_NF_2d_gaussian TestSerialization TestRegularization TestLogProbability TestFlows TestMultiModal covariance_pdf TestArmaJump TetsSkewNormal TestJumpDiffusionModel TestGaussianMixture TestRiskMeasures mean_pdf TestEconDensity TestLinearStudentT TestIntegration TestExecAsyncBatch TestHelpers TestDistribution normal T float32 sin train_test_split T train_test_split cos sin str configure NCYTaxiDropoffPredict concat log run_benchmark_train_test_fit_cv EuroStoxx50 dataset_class run_benchmark_train_test_fit_cv_ml append hour weekday minute rescale float strptime _convert_to_day_minute total_seconds log values AsyncExecutor run str list from_dict pprint append update configure RandomState mean Manager zip keys enumerate extend dict randint std str list configure RandomState items from_dict dict Manager pprint nanmean nanstd zip append keys log AsyncExecutor run pop items list log pop deepcopy LoopExecutor run tolatex int dropna make_overall_eurostoxx_df fit_by_cv target_feature_split train_valid_split fit_by_cv log_pdf train_valid_split mean_std mean sqrt abs target_feature_split fit items list time from_dict print Manager eval zip run range AsyncExecutor len pop items list replace print append enumerate initialize_models list items to_latex print OrderedDict empirical_benchmark initialize_models list items to_latex print OrderedDict empirical_benchmark cv_param_search initialize_models list items to_latex print OrderedDict empirical_benchmark subplots pdf linspace tick_params values make_overall_eurostoxx_df list set_title set_xlabel len shape title savefig legend set_color range plot2d plot tight_layout mean MixtureDensityNetwork enumerate join set_ylabel zeros array target_feature_split fit PCA PCA fit pca_comp shift load_time_series_csv log lastprice load_time_series_csv log load_time_series_csv update list index set dict zip append dropna DataFrame load_time_series_csv load_time_series_csv sum 
_make_realized_vol_df join _make_risk_free_df _make_fama_french_mom_df _make_return_df _compute_frama_french_factor_risk _make_riskneutral_df _make_fama_french_df print dropna array mean_ _kurtosis_mc DataFrame covariance squeeze stack _skewness_mc print to_csv index read_csv makedirs print add_argument ConfigRunner ArgumentParser run_configurations append parse_args add_argument ConfigRunnerLogProb run_configurations ArgumentParser parse_args logspace logspace logspace logspace logspace logspace logspace logspace set_ylim print extend list product _make_hashable deepcopy isinstance update encode sha256 items list isscalar _divergence_mc _divergence_mc reshape pdf mc_integration_student_t _determine_mc_proposal_dist tile zeros ndim_y range percentile RandomState hstack uniform append expand_dims range simulate show subplots plot print xlabel ylabel pdf linspace legend cdf array range append len configure graph generate_results_dataframe GoodnessOfFitResults dict load_pkl_log KEYS_OF_INTEREST Session fit_by_cv asarray EconDensity simulate pdf NeighborKernelDensityEstimation linspace GoodnessOfFit compute_results EconDensity print report_dict GaussianMixture KernelMixtureNetwork seed normal show asarray plot simulate print score pdf plot_surface gca linspace figure meshgrid expand_dims KernelMixtureNetwork fit subplots predict_density Line2D plot_loss y_max linspace show transpose regplot shape savefig KernelMixtureNetwork format plot sample y_min add_line set_size_inches print jointplot reshape build_econ1_dataset ravel fit eval_econ_data int concatenate _split_into_batches cpu_count dict Manager max AsyncExecutor run start Process RandomState euclidean_distances Series concat min fit delete cosine_distances KMeans cluster_centers_ labels_ append expand_dims DataFrame range values AgglomerativeClustering _standard_student_t_pdf stdtr standard_t RandomState gamma sum prod pi asarray RandomState multivariate_normal chisquare zeros len exp gammaln squeeze func linspace trapz int Number isinstance ones mean stack func multidim_t_rvs tile multidim_t_pdf expand_dims range append get_full_path print to_pickle get_full_path print to_csv get_full_path print dump format print name abspath name closed to_csv join strftime str list set_index sort_index to_datetime read_csv values norm square sqrt zeros sum range _project_to_pos_semi_def range grad fun warn mean AdamOptimizer abs clip fun warn flatten mean abs tuple isinstance exp value reduce_max reduce_sum linspace append expand_dims update appendleft extendleft hasattr popleft input_layers input_layer set add reversed deque append BatchNormLayer NonlinearityLayer getattr identity update convert_to_tensor getfullargspec join isinstance get_close_matches warn set dict keys append get_all_layers get_output_for append add set get_all_layers from_iterable list prod map dot det trace log dot det exp reshape mc_integration_student_t tile zeros range reshape mean_ mc_integration_student_t tile zeros range | [](https://travis-ci.org/freelunchtheorem/Conditional_Density_Estimation) [](https://pepy.tech/project/cde) # Conditional Density Estimation (CDE) ## Description Implementations of various methods for conditional density estimation * **Parametric neural network based methods** * Mixture Density Network (MDN) * Kernel Mixture Network (KMN) * Normalizing Flows (NF) * **Nonparametric methods** * Conditional Kernel Density Estimation (CKDE) | 2,131 |
friendshipkim/neuron-merging | ['network pruning'] | ['Neuron Merging: Compensating for Pruned Neurons'] | decompose.py models/WideResNet.py models/LeNet_300_100.py models/__init__.py models/ResNet.py main.py models/VGG.py Decompose create_scaling_mat_ip_thres_bias create_scaling_mat_conv_thres_bn test weight_init adjust_learning_rate save_state train LeNet_300_100 ResNet conv3x3 BasicBlock Bottleneck VGG BasicBlock NetworkBlock WideResNet norm pairwise_distances where zeros max range norm reshape min argmin pairwise_distances where array append zeros abs max range pop join list print save keys data format criterion model backward print dataset zero_grad step cuda enumerate len format model print batch_size dataset eval save_state float cuda len print param_groups lr zip pop copy_ state_dict | # Neuron Merging: Compensating for Pruned Neurons Pytorch implementation of **Neuron Merging: Compensating for Pruned Neurons**, accepted at 34th Conference on Neural Information Processing Systems (NeurIPS 2020).  ## Requirements To install requirements: ```setup conda env create -f ./environment.yml ``` Python environment & main libraries: * python 3.8 | 2,132 |
frobelbest/BANet | ['depth and camera motion'] | ['BA-Net: Dense Bundle Adjustment Network'] | legacy/deeptam/python/deeptam_tracker/utils/view_utils.py dec.py legacy/feat.py legacy/deeptam/python/deeptam_tracker/evaluation/rgbd_benchmark/evaluate_ate.py legacy/deeptam/python/deeptam_tracker/utils/datatypes.py legacy/deeptam/python/deeptam_tracker/utils/rotation_conversion.py legacy/deeptam/python/deeptam_tracker/utils/vis_utils.py legacy/deeptam/python/deeptam_tracker/models/helpers.py legacy/deeptam/python/deeptam_tracker/evaluation/rgbd_sequence.py enc.py legacy/deeptam/examples/eval2.py legacy/deeptam/python/deeptam_tracker/evaluation/metrics.py legacy/deeptam/python/deeptam_tracker/evaluation/rgbd_benchmark/evaluate_rpe.py legacy/seq_example.py legacy/eval.py legacy/deeptam/python/deeptam_tracker/tracker.py legacy/deeptam/python/deeptam_tracker/models/networks.py bundlenet.py legacy/utils_python.py legacy/deeptam/python/deeptam_tracker/utils/helpers.py legacy/ba.py legacy/example.py legacy/deeptam/python/deeptam_tracker/models/networks_base.py legacy/deeptam/python/deeptam_tracker/evaluation/rgbd_benchmark/associate.py legacy/deeptam/examples/example_advanced_sequence.py legacy/deeptam/python/deeptam_tracker/models/blocks.py legacy/deeptam/examples/example_basic.py rotation2quaternion equation_construction_gradient CameraJacobianMatrix AngleaAxisRotation VMatrix DepthJacobianMatrix BundleNet _DepthwiseConv2DNativeBackpropInputGrad upsample DLA batch_norm building_block symmetric_padding conv2d batch_norm_relu DRN bottleneck_block projection_shortcut _block Tracker rotation2quaternion3D valid_point_and_depth2 drawCorrespondences load_pair valid_point_and_depth load_data readPFM rotation2quaternion3D valid_point_and_depth Pyramid batch_norm building_block batch_norm_selu conv2d reflect_padding batch_norm_relu DRN bottleneck_block projection_shortcut _block rotation2quaternion3D drawCorrespondences valid_point_and_depth readPFM readlist interpolate2d2 interpolate2d translate interpolate interpolate2d3 translations_to_projective_transforms rotation2quaternion3D valid_point_and_depth2 drawCorrespondences load_pair valid_point_and_depth load_data update_visualization main init_visualization track_rgbd_sequence main simple_evaluation simple_visualization Tracker TrackerCore rgbd_ate angle_diff rgbd_rpe position_diff RGBDSequence read_file_list associate evaluate_ate align plot_traj percentile transform44 compute_distance read_trajectory distances_along_trajectory evaluate_trajectory scale rotations_along_trajectory compute_angle ominus evaluate_rpe find_closest_index _refine _upsample_prediction create_flow_inputs_and_gt motion_block _predict_flow flow_block convert_NCHW_to_NHWC convrelu apply_motion_increment resize_area_NCHW convrelu2 default_weights_initializer fcrelu myLeakyRelu convert_NHWC_to_NCHW conv2d scale_tensor resize_nearest_neighbor_NCHW TrackingNetwork TrackingNetworkBase Pose_identity optimistic_restore load_myNetworks_module_noname load_myNetworks_module angleaxis_to_angle_axis numpy_to_Vector3 angleaxis_to_rotation_matrix rotation_matrix_to_angleaxis angleaxis_to_quaternion safe_crop_image adjust_intrinsics safe_crop_array2d convert_array_to_colorimg convert_array_to_grayimg convert_between_c2w_w2c equation_construction_grad int get_shape constant astype float32 depthwise_conv2d_native_backprop_input pad tile batch_normalization batch_norm relu symmetric_padding flatten sqrt stack norm sort range logical_and square greater flatten sqrt randint Sobel sum CV_32F asarray 
imwrite reshape rand matmul copy flatten COLOR_RGB2BGR cvtColor range circle asarray reshape square matmul range flatten sqrt append randint Sobel sum CV_32F decode list rstrip reshape map groups match flipud fromfile float open selu batch_norm reflect_padding asarray float64 reshape readlines close astype flatten open append split to_float int reshape logical_and equal gather expand_dims floor cast tile clip_by_value add_n zeros float range to_float get_shape int reshape logical_and equal gather expand_dims floor cast tile clip_by_value add_n zeros float range to_float get_shape int reshape logical_and equal gather expand_dims floor cast tile clip_by_value add_n zeros float range get_shape int reshape gather expand_dims floor cast tile clip_by_value add_n zeros float range set_size_inches set_title plot suptitle add_subplot set_visible set_zlim figure legend set_title plot squeeze pause difference imshow convert_array_to_colorimg array cla update_visualization image_height RGBDSequence show feed_frame append range format set_init_pose Tracker get_timestamp get_sun3d_intrinsics clear rgbd_rpe print poses init_visualization image_width get_sequence_length get_dict join track_rgbd_sequence data_dir add_argument dirname ArgumentParser parse_args print angle_diff position_diff format show subplot format set_title print squeeze difference imshow set_visible convert_array_to_colorimg format set_keyframe print image_width simple_evaluation get_dict image_height get_sun3d_intrinsics compute_current_pose RGBDSequence TrackerCore simple_visualization dot row remove format close mkstemp write_rgbd_pose_format evaluate_rpe split remove format evaluate_ate close mkstemp write_rgbd_pose_format split read split open list remove sort append keys svd transpose identity set_printoptions mean zeros matrix range plot sort append median range len max_difference align add_subplot verbose read_file_list ArgumentParser save second_file max open list A use offset plot_traj transpose set_xlabel save_associations savefig legend parse_args first_file plot close mean sqrt zip float keys join print sort add_argument min write associate dot set_ylabel figure median std len dot array outer read write isnan dict open enumerate append split int abs len append sort list keys append sort list keys median list sort pi sample compute_distance distances_along_trajectory scale append rotations_along_trajectory compute_angle ominus keys range find_closest_index len sort list add_subplot pi verbose evaluate_trajectory ArgumentParser save max open seed use max_pairs offset fixed_delta set_xlabel delta read_trajectory savefig groundtruth_file parse_args estimated_file plot close mean sqrt scale float int join print add_argument write min dot set_ylabel delta_unit figure median std len transfer_key_frame2 stop_gradient concat replace_nonfinite conv2d convrelu conv2d_transpose as_list list conv2d_transpose print slice min append conv2d isinstance as_list int constant convert_NCHW_to_NHWC pow int32 resize_nearest_neighbor squeeze angle_axis_to_rotation_matrix matmul add rotation_matrix_to_angle_axis expand_dims join exec_module module_from_spec dirname spec_from_file_location format sorted NewCheckpointReader restore global_variables Saver dbg get_variable_to_shape_map float64 astype norm normalized Vector3 angleaxis_to_angle_axis toRotationMatrix Vector3 float64 astype angleaxis_to_quaternion toAngleAxis new crop paste mode full int height print safe_crop_image crop astype width float32 resize depth round max uint8 astype rollaxis 
copy uint8 astype nan copy Vector3 R transpose t inverse Identity | # BANet Source Code for the Paper BA-Net: Dense Bundle Adjustment Network (under construction, but you are welcome to contact me at [[email protected]]([email protected]) if you need to test your own data for comparisons). | 2,133
fromm-m/aaai2021-am-peer-reviews | ['argument mining'] | ['Argument Mining Driven Analysis of Peer-Reviews'] | 2_clean_data/executables/clean_dataset_iclr19.py 5_model_works/sentence_level/executables/s_data_preprocess.py 2_clean_data/executables/cleaning_helper.py 5_model_works/token_level/executables/t_eval.py 2_clean_data/executables/clean_dataset_neuroai19.py 3_annotation_study/executables/clean_annotation.py 2_clean_data/executables/clean_dataset_midl19.py 4_post_processing/exploratory_data_analysis/human_accuracy_token.py 4_post_processing/exploratory_data_analysis/human_accuracy_sentence.py 1_scrape_data/executables/scrap_data.py 5_model_works/sentence_level/executables/s_predict.py 4_post_processing/executables/new_preparation.py 5_model_works/token_level/executables/t_data_postprocess.py 4_post_processing/executables/calculate_sentence_position.py 2_clean_data/executables/clean_dataset_iclr20.py 3_annotation_study/src/alpha.py 7_paper_acceptance_model/executables/BertClassification_for_tobert.py 7_paper_acceptance_model/executables/p_eval.py 7_paper_acceptance_model/executables/tobert_utils.py 4_post_processing/executables/add_manual_decisions.py 2_clean_data/executables/clean_dataset_midl20.py 5_model_works/token_level/executables/t_predict.py 7_paper_acceptance_model/executables/p_modelprocessing.py 3_annotation_study/executables/agreement.py 3_annotation_study/executables/alpha.py 5_model_works/sentence_level/executables/s_train.py 2_clean_data/executables/clean_dataset_graph20.py 7_paper_acceptance_model/executables/p_preprocess.py 5_model_works/token_level/executables/t_data_preprocess.py 5_model_works/token_level/executables/t_processors.py 5_model_works/token_level/executables/t_modelprocessing.py 7_paper_acceptance_model/executables/p_processors.py 6_arg_extraction/executables/extract_topk.py 7_paper_acceptance_model/executables/ToBERT.py 6_arg_extraction/executables/umap_arguments.py download clean_data remove_markdown find_urls remove_dollar remove_url encode_decode clean_data_iclr remove_escape_sequences dataset_prep dataset_prep dataset_prep dataset_prep dataset_prep dataset_prep Agreement Annotation intersect Unit get_assignment get_units write_annotation write_review_length Annotation intersect Unit get_sentence_position check_length get_next_change get_majority remove_leading_characters get_decisions get_segments_in_sentences iterate_over_all_reviews get_all_lengths get_segments get_sentence_ids remove_too_short_segments run_algorithm_on_one_review get_sentences get_segments_in_one_sentence n_of_tokens get_majorities only_pos_neg_in_prediction only_pos_neg_in_ground_truth print_statistics get_only_reviews_from only_argument_detection get_individual_positions only_pos_neg_in_prediction create_sen_one_line only_pos_neg_in_ground_truth split_review_sentence only_argument_detection get_individual_positions predict DataPrecessForSingleSentence gen_dataloader DataPrecessForSingleSentence EarlyStopping test gen_dataloader train predict read_labeled_reviews merge_review_scores merge_aurc_scores construct_reviews calculate_sen_score average_scores merge_hidden split_conference add_rating_and_decision draw_graph merge_confidence create_token_per_line_data create_evaluate_data create_sen_per_line_data split_train_dev_test_topic split_sentence split_train_dev_test_ratio split_review split_rev_training check_max_length_review split_aurc get_splitting_info write_scores calculate_accuracy calculate_f1 predict_test_data write_report main data_preparer EarlyStopping Trainer 
Predictor run_predict read_file BertTLC StanceProcessor InputFeatures InputExample RecogProcessor convert_examples_to_features convert_topic_examples_to_features DataProcessor ClassifyProcessor topk t_topk get_labeled_review get_complete_id load_reviews_data InputFeatures predict_review_using_bert preprocess_data_for_bert predict_papers_using_bert write_scores calculate_accuracy calculate_f1 predict_test_data write_report main data_preparer EarlyStopping split_train_dev_test InputFeatures convert_examples_to_features DataProcessor InputExample evaluate test ToBERT_Encoder main train MyDataset EarlyStopping prepare_data_for_tobert load_states_data epoch_time iterget_notes readers details outdir id ddate exists defaultdict getcwd tmdate append conference_name chdir content cdate tcdate print writers makedirs index find index find find_urls replace findall join reshape astype to_csv dirname DataFrame len print defaultdict tolist read_excel append items list print to_csv dict append DataFrame print to_csv dict append DataFrame range len literal_eval split count range len begin end tag empty range T unique len get_majority empty get_next_change concatenate empty array len list asarray concatenate PunktParameters set PunktSentenceTokenizer span_tokenize tokenize int isalnum remove_leading_characters get_segments int str get_segments_in_one_sentence concatenate empty len word_tokenize remove delete str empty range get_decisions get_segments_in_sentences get_all_lengths len get_sentence_ids close remove_too_short_segments read_from_file get_sentences savetxt open raters get_majorities fill_gap run_algorithm_on_one_review read_csv begin get_all_lengths end len tag get_sentence_position get_segments_in_one_sentence find append to_numpy empty array range fill_gap append to_numpy append pop copy pop copy only_pos_neg_in_prediction only_pos_neg_in_ground_truth print get_only_reviews_from mean only_argument_detection append f1_score accuracy_score get_individual_positions print eval strip append pop join list items strip tokenizer extend range len create_sen_one_line split_review_sentence extend get_input DataPrecessForSingleSentence DataLoader TensorDataset tensor read_csv from_pretrained concatenate print to_csv eval softmax gen_dataloader save to array read_csv concatenate print flatten eval f1_score float sum array len from_pretrained save_model model get_linear_schedule_with_warmup tuple clip_grad_norm_ zero_grad finetuned gen_dataloader list early_stopping step append to range patience test eval save_pretrained item outdir_model enumerate join learning_rate backward AdamW print EarlyStopping tqdm average parameters epochs early_stop model prediction_file finetuned representation_file open writer range outdir_model join writerow len append join range join print round read_csv join xlabel title lineplot ylim savefig read_csv read_csv join read_labeled_reviews iterrows print group to_csv set add dict match read_csv append DataFrame range len int join iterrows print group to_csv dict match append DataFrame read_csv print to_csv append DataFrame read_csv print load save to_csv tqdm eval append sum read_csv len groupby list join iterrows print to_csv append keys read_csv iterrows read_csv print sort insert eval append range len iterrows eval read_csv iterrows print strip eval append read_csv join list items strip extend tokenizer to_csv append DataFrame len iterrows print DataFrame strip tokenizer to_csv append range read_csv len iterrows print eval append read_csv iterrows print range eval append 
get_splitting_info read_csv iterrows print eval append train_test_split range read_csv len print iterrows read_csv len join int str reset_index arange to_csv copy read_csv makedirs format print log_softmax len tqdm eval numpy info append to argmax enumerate info info append info classification_report tensor TensorDataset convert_examples_to_features convert_topic_examples_to_features gradient_accumulation_steps from_pretrained models do_eval write_scores get_train_examples model get_linear_schedule_with_warmup tuple clip_grad_norm_ zero_grad use_topic DataParallel DistributedDataParallel DataLoader ArgumentParser device output_dir do_train max_grad_norm predict_test_data set_weights seed eval_batch_size list open early_stopping max_seq_length data_dir set_device len DistributedSampler get_labels device_count parse_args to SequentialSampler dump format init_process_group mean lower eval save_pretrained num_train_epochs info manual_seed trange data_preparer task_name train enumerate use_weight int warmup_proportion join write_report backward AdamW EarlyStopping add_argument named_parameters RandomSampler get_dev_examples parameters tqdm get_test_examples bool step local_rank train_batch_size early_stop makedirs list read_csv zip save_attention predict tqdm save_confidence save_hidden read_file Predictor array append join English insert text convert_tokens_to_ids InputFeatures extend tokenizer1 enumerate tokenize guid info label create_tokenizer text_a range append len guid text_b English convert_tokens_to_ids tokenizer1 append create_tokenizer text_a range insert InputFeatures info label tokenize enumerate join text extend len groupby join sort text extend tokenizer eval save zip append range read_csv len join iterrows add set match read_csv groupby join sort tolist save append get_labeled_review read_csv cat copy apply load DataFrame append column_stack insert InputFeatures convert_tokens_to_ids append tokenize len append tensor numpy from_pretrained load_reviews_data extend drop predict_review_using_bert save append to_numpy to range preprocess_data_for_bert len concatenate flatten DataProcessor print load defaultdict print extend to_csv append train_test_split DataFrame len isinstance print reshape stack loss_function zeros forward eval SGD ToBERT_Encoder DataFrame str load_state_dict range set_class_weights test load time evaluate to_csv prepare_data_for_tobert load_states_data epoch_time predict_papers_using_bert load DataFrame train_test_split reset_index drop int | # Argument Mining in Scientific Reviews (AMSR) Accompanying repository of our [AAAI2021](https://ojs.aaai.org/index.php/AAAI/article/view/16607) paper "Argument Mining Driven Analysis of Peer-Reviews". We release a new dataset of peer-reviews from different computer science conferences with annotated arguments, called AMSR (**A**rgument **M**ining in **S**cientific **R**eviews). ## Dataset The dataset is available [here](https://zenodo.org/record/4314390). It contains: 1. Raw Conference Data 2. Cleaned Conference Data 3. Annotated Conference Data ## Requirements | 2,134 |
frotms/PaddleOCR2Pytorch | ['optical character recognition'] | ['PP-OCR: A Practical Ultra Lightweight OCR System', 'PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System'] | pytorchocr/utils/e2e_utils/pgnet_pp_utils.py tools/infer/predict_system.py pytorchocr/modeling/necks/__init__.py pytorchocr/modeling/necks/east_fpn.py converter/ch_ppocr_server_v2.0_det_converter.py pytorchocr/modeling/common.py pytorchocr/modeling/backbones/rec_resnet_vd.py pytorchocr/modeling/heads/rec_att_head.py misc/rec_srn_head.py converter/multilingual_mobile_v2.0_rec_converter.py pytorchocr/utils/e2e_utils/visual.py pytorchocr/modeling/backbones/det_resnet_vd_sast.py tools/infer/pytorchocr_utility.py pytorchocr/modeling/heads/det_db_head.py pytorchocr/utils/e2e_utils/extract_textpoint_fast.py pytorchocr/modeling/heads/self_attention.py tools/infer/predict_det.py tools/infer/predict_e2e.py pytorchocr/base_ocr_v20.py converter/ch_ppocr_mobile_v2.0_cls_converter.py misc/lstm.py pytorchocr/modeling/heads/det_east_head.py misc/hs.py configs/rec/multi_language/generate_multi_language_configs.py pytorchocr/data/imaug/operators.py pytorchocr/modeling/backbones/rec_mobilenet_v3.py pytorchocr/utils/utility.py pytorchocr/modeling/backbones/e2e_resnet_vd_pg.py misc/pt_self_attention.py tools/infer/predict_cls.py misc/layernorm.py pytorchocr/modeling/backbones/rec_resnet_fpn.py converter/ch_ppocr_mobile_v2.0_rec_converter.py converter/ch_ppocr_mobile_v2.0_det_converter.py misc/pp_ocr.py misc/gru_cell.py pytorchocr/modeling/backbones/__init__.py pytorchocr/modeling/heads/rec_ctc_head.py pytorchocr/modeling/backbones/det_resnet_vd.py misc/pp_self_attention.py pytorchocr/modeling/architectures/__init__.py pytorchocr/modeling/heads/det_sast_head.py pytorchocr/data/__init__.py pytorchocr/modeling/heads/e2e_pg_head.py pytorchocr/modeling/transforms/__init__.py pytorchocr/modeling/heads/rec_srn_head.py converter/srn_converter.py misc/rec_srn.py pytorchocr/data/imaug/__init__.py pytorchocr/postprocess/rec_postprocess.py converter/ch_ppocr_server_v2.0_rec_converter.py misc/diff.py tools/infer/predict_rec.py misc/fc.py pytorchocr/utils/e2e_utils/extract_batchsize.py misc/pp_rec_resnet_fpn.py pytorchocr/postprocess/pg_postprocess.py misc/pt_rec_srn_head.py converter/det_converter.py pytorchocr/modeling/heads/__init__.py converter/rec_converter.py pytorchocr/modeling/architectures/base_model.py pytorchocr/modeling/necks/pg_fpn.py misc/pt_rec_resnet_fpn.py pytorchocr/modeling/necks/rnn.py converter/ch_ppocr_v2_rec_converter.py misc/conv.py pytorchocr/postprocess/__init__.py pytorchocr/modeling/backbones/det_mobilenet_v3.py pytorchocr/modeling/transforms/tps.py misc/common.py converter/e2e_converter.py misc/pp_rec_srn_head.py pytorchocr/modeling/necks/sast_fpn.py misc/attention_grucell.py pytorchocr/postprocess/sast_postprocess.py pytorchocr/postprocess/east_postprocess.py pytorchocr/postprocess/cls_postprocess.py converter/ch_ppocr_v2_det_converter.py pytorchocr/modeling/backbones/rec_mv1_enhance.py misc/attention_head.py pytorchocr/modeling/heads/cls_head.py pytorchocr/postprocess/locality_aware_nms.py misc/rec_resnet_fpn.py pytorchocr/utils/e2e_utils/extract_textpoint_slow.py pytorchocr/postprocess/db_postprocess.py pytorchocr/modeling/necks/db_fpn.py merge_config loss_file ArgsParser MobileV20DetConverter MobileV20DetConverter ServerV20RecConverter ServerV20DetConverter ServerV20RecConverter PPOCRv2DetConverter PPOCRv2RecConverter DetV20DetConverter read_network_config_from_yaml E2EV20DetConverter read_network_config_from_yaml 
MultilingualV20RecConverter RecV20RecConverter read_network_config_from_yaml RecV20RecConverter read_network_config_from_yaml PTAttentionGRUCell PPAttentionGRUCell paddle_grucell torch_grucell PTAttentionGRUCell PPAttentionGRUCell paddle_grucell torch_grucell PPAttentionHead PTAttentionHead Hsigmoid Hswish Activation paddle_conv torch_conv print_cmp paddle_fc torch_fc paddle_grucell get_para_bias_attr torch_grucell paddle_hs Hsigmoid torch_hs paddle_func torch_func torch_lstm_m paddle_lstm get_para_bias_attr torch_lstm PaddlePaddleOCR ConvBNLayer ResNetFPN BottleneckBlock ShortCut BasicBlock GSRM PVAM VSFD SRNHead PrepareEncoder PrepareDecoder MultiHeadAttention PrePostProcessLayer Encoder FFN WrapEncoder WrapEncoderForFeature EncoderLayer ConvBNLayer ResNetFPN BottleneckBlock ShortCut BasicBlock GSRM SRNHead Lambda VSFD PVAM PrepareEncoder PrepareDecoder MultiHeadAttention Lambda PrePostProcessLayer Encoder FFN WrapEncoder WrapEncoderForFeature LambdaXY EncoderLayer paddle_func torch_func paddle_func torch_func paddle_func torch_func print_cmp BaseOCRV20 E2EResizeForTest NormalizeImage DetResizeForTest ToCHWImage DecodeImage KeepKeys transform create_operators Hsigmoid Hswish Activation BaseModel build_model MobileNetV3 make_divisible ConvBNLayer ResidualUnit SEModule ConvBNLayer ResNet BasicBlock BottleneckBlock ConvBNLayer BasicBlock ResNet_SAST BottleneckBlock ConvBNLayer ResNet BasicBlock BottleneckBlock MobileNetV3 ConvBNLayer DepthwiseSeparable hardsigmoid MobileNetV1Enhance SEModule ConvBNLayer ResNetFPN BottleneckBlock ShortCut BasicBlock ConvBNLayer ResNet BasicBlock BottleneckBlock build_backbone ClsHead DBHead Head ConvBNLayer EASTHead ConvBNLayer SASTHead SAST_Header2 SAST_Header1 ConvBNLayer PGHead AttentionLSTM AttentionHead AttentionGRUCell AttentionLSTMCell CTCHead GSRM SRNHead Lambda VSFD PVAM PrepareEncoder PrepareDecoder MultiHeadAttention Lambda PrePostProcessLayer Encoder FFN WrapEncoder WrapEncoderForFeature LambdaXY EncoderLayer build_head DBFPN ConvBNLayer EASTFPN DeConvBNLayer ConvBNLayer PGFPN DeConvBNLayer EncoderWithFC Im2Seq EncoderWithRNN_ SequenceEncoder EncoderWithRNN DeConvBNLayer ConvBNLayer FPN_Down_Fusion SASTFPN FPN_Up_Fusion Cross_Attention build_neck ConvBNLayer GridGenerator TPS LocalizationNetwork build_transform ClsPostProcess DBPostProcess EASTPostProcess intersection_iog standard_nms nms standard_nms_inds nms_locality intersection soft_nms weighted_merge PGPostProcess AttnLabelDecode SRNLabelDecode CTCLabelDecode BaseRecLabelDecode SASTPostProcess build_post_process get_image_file_list check_and_read_gif org_tcl_rois pre_process sort_with_direction sort_and_expand_with_direction_v2 generate_pivot_list_fast remove_blank restore_poly get_keep_pos_idxs ctc_greedy_decoder point_pair2poly instance_ctc_greedy_decoder sort_and_expand_with_direction shrink_quad_along_width ctc_decoder_for_image extract_main_direction add_id softmax expand_poly_along_width insert_blank sort_by_direction_with_image_id sort_by_direction_with_image_id_deprecated get_dict sort_with_direction sort_and_expand_with_direction_v2 remove_blank generate_pivot_list_horizontal get_keep_pos_idxs ctc_greedy_decoder point_pair2poly instance_ctc_greedy_decoder sort_and_expand_with_direction shrink_quad_along_width ctc_decoder_for_image extract_main_direction add_id softmax expand_poly_along_width generate_pivot_list_slow generate_pivot_list_tt_inference insert_blank sort_by_direction_with_image_id sort_by_direction_with_image_id_deprecated generate_pivot_list_curved get_dict 
PGNet_PostProcess norm2 point_pair2poly resize_image_for_totaltext cos shrink_quad_along_width resize_image_min resize_image expand_poly_along_width main TextClassifier TextDetector TextE2E main TextRecognizer main sorted_boxes TextSystem resize_img str_count draw_text_det_res draw_boxes AnalysisConfig get_default_config base64_to_cv2 parse_args draw_ocr_box_txt draw_e2e_res read_network_config_from_yaml text_visual update items list split enumerate seed astype float32 seed load PTAttentionGRUCell T list items print endswith tolist astype float32 copy_ Tensor layer PTAttentionHead seed transpose astype float32 resize expand_dims imread load transpose astype float32 tfc Conv2d copy_ resize Tensor expand_dims imread format print min mean shape sum max seed astype float32 seed load T print astype float32 tfc copy_ Tensor Linear sqrt ParamAttr L2Decay Uniform GRUCell seed astype float32 seed Tensor astype float32 seed print astype float32 save seed load T list items replace print endswith tolist LayerNorm copy_ Tensor numpy layer seed astype float32 seed load list items LSTM lstm tolist astype float32 copy_ Tensor print items list EncoderWithRNN format shape PTBackbone format shape astype float32 load PTHead op update append BaseModel deepcopy int max pop pop print pop pop reshape area buffer Polygon print reshape area Polygon append array append array append array exp arange copy intersection range append weighted_merge pop deepcopy update append join listdir isdir COLOR_GRAY2RGB VideoCapture read print cvtColor pop int deepcopy extend copy append range len squeeze shape any cast append org_tcl_rois numpy to_tensor range sum exp max int list groupby append sum len remove_blank argmax array get_keep_pos_idxs list shape zip argmax len append join instance_ctc_greedy_decoder reshape sort_part_with_direction len append int sort_with_direction norm tolist mean shape array append max range len int sort_with_direction norm tolist mean shape array append max range len enumerate len array norm shrink_quad_along_width array point_pair2poly format print reshape exit zip append array clip expand_poly_along_width len connectedComponents uint8 list sort_and_expand_with_direction_v2 transpose astype where zip append thin range ctc_decoder_for_image mean norm array mean reshape sum tolist reshape sort_part_with_direction len norm append array softmax ctc_greedy_decoder connectedComponents uint8 list sort_with_direction sort_and_expand_with_direction_v2 transpose astype extend where zip append thin add_id range ctc_decoder_for_image sort_with_direction where max list transpose ctc_decoder_for_image append extract_main_direction add_id range connectedComponents astype mean unique zip int uint8 reshape min extend array connectedComponents uint8 list sort_and_expand_with_direction_v2 transpose astype where zip append thin add_id range shape float int resize shape float int resize shape float int resize format check_and_read_gif print TextClassifier image_dir text_classifier get_image_file_list imread range append len TextRecognizer text_recognizer list sorted range imwrite draw_ocr_box_txt fromarray basename text_sys COLOR_BGR2RGB vis_font_path drop_score join time TextSystem cvtColor makedirs add_argument ArgumentParser lower basename polylines reshape putText zip imread polylines reshape imread max shape resize float array seed int truetype Draw text new copy blend sqrt paste zip polygon getsize max enumerate isalpha len str truetype concatenate text create_blank_img append array enumerate uint8 b64decode 
fromstring IMREAD_COLOR imdecode encode polylines astype int64 zip array len | # [PaddleOCR2Pytorch](https://github.com/frotms/PaddleOCR2Pytorch) Simplified Chinese | [English](README_en.md) ## Introduction **"Freeloading"** [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR). This project aims to: - Learn PaddleOCR - Run models trained with PaddleOCR in PyTorch - Provide a reference for converting Paddle models to PyTorch ## TODO - [ ] Text recognition: [ABINet](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/algorithm_rec_abinet.md), [VisionLAN](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/algorithm_rec_visionlan.md), [SPIN](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/algorithm_rec_spin.md), [RobustScanner](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/algorithm_rec_robustscanner.md) | 2,135
fshp971/RPG | ['generalization bounds'] | ['Robustness, Privacy, and Generalization of Adversarial Training'] | train.py utils/data.py utils/__init__.py eval.py utils/datasets/torchvision_datasets.py models/__init__.py utils/generic.py utils/argument.py utils/datasets/__init__.py utils/attack.py models/resnet.py eval_grad.py main evaluate main get_grad_noise get_grad_mu main save_checkpoint get_grad_norm WRN ResNet Bottleneck WRN34_10 BasicBlock WRN50_2 get_args PGDAttacker membership_inference_attack TrainLoader EvalLoader get_arch get_loader AverageMeter add_log CIFAR100 CIFAR10 update perturb AverageMeter item len load resume_path evaluate batch_size get_arch PGDAttacker dict load_state_dict membership_inference_attack get_loader cpu dataset cuda CrossEntropyLoss criterion model perturb backward cpu zero_grad parameters zip append train array len permutation criterion perturb model backward reshape concatenate zero_grad parameters zip append train next range len get_grad_noise get_grad_mu grad_rep_T grad_samp_T format save makedirs criterion model reshape grad sqrt eval zip append max lr_decay_rate perturb model zero_grad SGD get_grad_norm save_checkpoint save_dir train_steps Adam add_log next range format param_groups sqrt resume lr info item lr_decay_freq criterion backward parameters train step len add_argument ArgumentParser max Softmax eval append array range len append Compose TrainLoader Normalize CIFAR10 CIFAR100 EvalLoader | # Robustness, Privacy, and Generalization of Adversarial Training This repository contains the PyTorch source code for technical report, "Robustness, Privacy, and Generalization of Adversarial Training," by Fengxiang He, Shaopeng Fu, Bohan Wang, and Dacheng Tao. ## Code Structures ``` |----./ |---- models/ |---- __init__.py |---- resnet.py |---- utils/ |---- datasets/ | 2,136 |
ftheberge/Graph_Aware_Measures | ['graph clustering'] | ['Ensemble Clustering for Graphs'] | partition_igraph.py partition_networkx.py gam community_ecg gam community_ecg print sum array len shell_index modularity ecount permute_vertices tolist len membership community_multilevel sum range community_leiden set_edge_attributes list core_number partition_at_level namedtuple P min generate_dendrogram edges best_partition values | # Graph Partition and Measures Python3 code implementing 11 graph-aware measures (gam) for comparing graph partitions as well as a stable ensemble-based graph partition algorithm (ecg). This code is pip installable for both igraph and networkx: * PyPI (igraph): https://pypi.org/project/partition-igraph/ * PyPI (networkx): https://pypi.org/project/partition-networkx/ ## Graph aware measures (gam) The measures are respectively: * 'rand': the RAND index * 'jaccard': the Jaccard index * 'mn': pairwise similarity normalized with the mean function | 2,137 |
fubel/stmodeling | ['action recognition', 'human object interaction detection'] | ['Comparative Analysis of CNN-based Spatiotemporal Reasoning in Videos'] | datasets_video.py thop/utils.py modules/DNDFmodule.py modules/Transformermodule.py opts.py thop/__init__.py modules/MLPmodule.py process_dataset.py modules/FCN2Dmodule.py extract_frames.py ops/__init__.py dataloader.py dataset.py process_sth_v2_dataset.py modules/FCN3Dmodule.py modules/CONVLSTMmodule.py thop/count_hooks.py modules/TSNmodule.py modules/TRNmodule.py main.py ops/plot_utils.py ops/basic_ops.py count_flops.py ops/utils.py models.py modules/RNNmodule.py transforms.py TSN TSNDataSet VideoRecord return_somethingv2 return_jester return_dataset extract target split validate count_parameters AverageMeter check_rootfolders accuracy save_checkpoint adjust_learning_rate main train TSN Stack GroupSpatialElasticDisplacement IdentityTransform GroupCenterCrop GroupNormalize GroupOverSample GroupMultiScaleResize ToTorchFormatTensor GroupMultiScaleRotate GroupScale GroupRandomSizedCrop GroupRandomHorizontalFlip GroupRandomCrop GroupMultiScaleCrop return_CONVLSTM ConvLSTMCell ConvLSTM Forest NeuralDecisionForest return_DNDF Tree FCN2Dmodule return_FCN2D FCN3Dmodule return_FCN3D MLPmodule return_MLP RNNmodule return_RNN Transformermodule return_Transformer return_TRN RelationModuleMultiScaleWithClassifier RelationModuleMultiScale RelationModule TSNmodule return_TSN ConsensusModule SegmentConsensus Identity plot_loss plot_accuracy plot_statistics softmax class_accuracy log_add get_grad_hook count_maxpool count_softmax count_avgpool count_tanh count_relu count_layer_norm count_conv2d count_linear count_conv3d count_bn2d profile print join exit print join range len system extract join makedirs validate train_list store_name IdentityTransform SGD root_path DataLoader adjust_learning_rate root_log save_checkpoint input_std dataset cuda max get_augmentation open plot_statistics scale_size lr_steps GroupNormalize Adam crop_size load_state_dict append parse_args modality range format TSNDataSet start_epoch input_mean num_motion resume val_list lr get_optim_policies load join return_dataset evaluate print check_rootfolders TSN isfile train epochs len data model clip_grad_norm_ zero_grad cuda log clip_gradient update format size item partialBN flush enumerate time no_partialbn criterion backward Variable print AverageMeter write accuracy parameters step len update time format print size AverageMeter write eval item cuda enumerate flush len copyfile save param_groups weight_decay sum lr topk size t eq mul_ expand_as append sum max print mkdir ConvLSTM Forest NeuralDecisionForest FCN2Dmodule FCN3Dmodule MLPmodule RNNmodule Transformermodule RelationModuleMultiScale RelationModule TSNmodule format plot xlabel ylabel title ylim savefig figure legend xlim format plot xlabel ylabel title savefig figure legend plot_loss plot_accuracy exp astype confusion_matrix mean sum diag kernel_size size out_channels groups in_channels kernel_size size out_channels groups in_channels numel numel size Tensor prod numel Tensor prod numel in_features numel numel numel model apply eval modules item cuda | <div align="center"> # CNN-based Spatio-Temporal Modeling [](https://arxiv.org/abs/1909.05165) / [Video](https://youtu.be/MaH1LbzcWMU) </div> Pytorch implementation for the paper ["Comparative Analysis of CNN-based Spatiotemporal Reasoning in Videos"](https://arxiv.org/pdf/1909.05165.pdf). 
In this work, different **'Spatiotemporal Modeling Blocks'** are analyzed for the architecture illustrated in the figure below. <p align="center"><img src="https://github.com/fubel/stmodeling/blob/master/ops/STM_arch.jpg" align="middle" width="375" title="Motion Fused Frames" /></p> **Maintainers:** [Okan Köpüklü](https://github.com/okankop) and [Fabian Herzog](https://github.com/fubel) The structure was inspired by the project [TRN-pytorch](https://github.com/metalbubble/TRN-pytorch) ## Results and Pretrained Models The pretrained models can be found in our [Google Drive](https://drive.google.com/drive/folders/13x6ClKowbfPLf4RgA7ITt4mVEqtReqWI?usp=sharing). | 2,138
funkey/mala | ['semantic segmentation'] | ['Large Scale Image Segmentation with Structured Loss based Deep Learning for Connectome Reconstruction'] | scripts/train_until.py mala/losses/py_func_gradient.py scripts/mknet.py mala/gunpowder/add_local_shape_descriptor.py mala/__init__.py mala/networks/__init__.py mala/networks/unet.py mala/gunpowder/__init__.py mala/losses/__init__.py mala/losses/um_loss.py setup.py scripts/predict_affinities.py mala/losses/mask_loss.py build_ext AddLocalShapeDescriptor mask_loss_op mask_loss save_div aggregate py_func_gradient get_um_loss_gradient_op get_emst get_emst_op get_um_loss_gradient ultrametric_loss_op get_um_loss upsample downsample crop_zyx conv_pass unet predict_affinities train_until transpose convolution reduce_sum aggregate as_list constant reshape square reduce_sum mask_loss save_div range str get_default_graph float64 min astype emst info max um_loss um_loss arange tuple concat boolean_mask warn gather max py_func multiply transpose get_emst_op reduce_sum int64 cast meshgrid range subtract square sqrt as_list constant reshape float32 maximum len isinstance tuple conv3d getattr nn enumerate len tuple max_pooling3d tuple getattr nn conv3d_transpose as_list slice as_list str isinstance print upsample concat downsample shape crop_zyx conv_pass max len Snapshot join Hdf5Source Predict Pad IntensityScaleShift PrintProfilingStats BatchRequest add RAW Normalize register_volume_type Chunk ZeroOutConstSections Coordinate PRED_AFFINITIES PrintProfilingStats RandomLocation tuple mknhood3d IntensityAugment Train GrowBoundary ElasticAugment BalanceLabels IntensityScaleShift BatchRequest add SplitAndRenumberSegmentationLabels register_volume_type GT_MASK ZeroOutConstSections Coordinate SimpleAugment Snapshot GT_LABELS PreCache DefectAugment RandomProvider Normalize join Hdf5Source print GT_SCALE RAW AddGtAffinities GT_AFFINITIES | MALA ==== Training and evaluation scripts for MALA (https://arxiv.org/abs/1709.02974). | 2,139 |
furkanbiten/SelectiveTextStyleTransfer | ['scene text detection', 'data augmentation', 'style transfer'] | ['Selective Style Transfer for Text'] | twoStage/weighted_blending.py twoStage/magenta/vgg.py twoStage/magenta/image_utils.py twoStage/magenta/ops.py twoStage/magenta/imagenet_data.py twoStage/get_TextFCN_heatmaps.py twoStage/magenta/model.py twoStage/style_images.py twoStage/magenta/image_stylization_transform.py ImagenetData _describe_style _multiple_images _multiple_styles _style_mixture multiple_input_images _load_checkpoint style_from_camera console_entry_point main _parse_example_proto load_evaluation_images center_crop_resize_image save_np_image _aspect_preserving_resize load_np_image COCOText_inputs resize_image _crop imagenet_inputs _central_crop _smallest_size_at_least load_np_image_uint8 load_image form_image_grid style_image_inputs arbitrary_style_image_inputs transform conv2d residual_block upsampling weighted_instance_norm conditional_instance_norm conditional_style_norm vgg_16 checkpoint_file str restore format global_variables latest_checkpoint print IsDirectory Saver info expanduser append sorted keys zeros VideoCapture VideoWriter isinstance load_np_image which_styles print _multiple_images literal_eval _multiple_styles input_image output_dir expanduser expand_dims makedirs run ImagenetData uint8 BytesIO seek GFile squeeze write close getvalue imsave to_float resize_images uint8 constant value load_np_image resize_image_with_crop_or_pad min join Glob get_data_files_path reshape transpose greater_equal to_int32 logical_and Assert shape stack rank equal append _crop convert_to_tensor to_float to_int32 greater cond convert_to_tensor get_shape resize_bilinear squeeze shape set_shape _smallest_size_at_least expand_dims len update concat transpose cast VarLenFeature parse_single_example expand_dims values minimum to_float resize_images constant resize_image_with_crop_or_pad shape to_float _aspect_preserving_resize pad | furkanbiten/SelectiveTextStyleTransfer | 2,140 |
fuzihaofzh/repetition-problem-nlg | ['text generation'] | ['A Theoretical Analysis of the Repetition Problem in Text Generation'] | tools/fairseq/fairseq_cli/validate.py tools/fairseq/fairseq_cli/train.py tools/fairseq/examples/backtranslation/extract_bt_data.py tools/fairseq/fairseq/modules/character_token_embedder.py tools/fairseq/examples/simultaneous_translation/__init__.py tools/e2e-metrics/pycocotools/__init__.py tools/fairseq/examples/simultaneous_translation/eval/agents/__init__.py tools/fairseq/examples/simultaneous_translation/eval/agents/word_splitter.py tools/fairseq/examples/noisychannel/rerank_generate.py tools/fairseq/scripts/rm_pt.py tools/fairseq/tests/test_reproducibility.py tools/fairseq/fairseq/data/multilingual/sampling_method.py tools/fairseq/scripts/average_checkpoints.py tools/fairseq/fairseq/optim/fp16_optimizer.py tools/e2e-metrics/pycocoevalcap/cider/cider.py tools/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py tools/fairseq/fairseq/modules/quantization/pq/__init__.py tools/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py tools/fairseq/fairseq/models/composite_encoder.py tools/fairseq/fairseq/models/huggingface/hf_gpt2.py tools/fairseq/tests/speech_recognition/test_cross_entropy.py tools/fairseq/fairseq/modules/quantization/scalar/modules/__init__.py tools/fairseq/fairseq/tasks/sentence_ranking.py tools/fairseq/fairseq/modules/quantization/scalar/modules/qemb.py tools/fairseq/examples/speech_recognition/w2l_decoder.py tools/fairseq/tests/test_multi_corpus_sampled_dataset.py tools/fairseq/fairseq/modules/__init__.py tools/fairseq/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py tools/fairseq/fairseq/model_parallel/modules/multihead_attention.py tools/e2e-metrics/pycocoevalcap/tokenizer/ptbtokenizer.py tools/fairseq/scripts/spm_encode.py tools/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py tools/fairseq/examples/simultaneous_translation/utils/latency.py tools/fairseq/fairseq/model_parallel/modules/__init__.py tools/fairseq/fairseq_cli/eval_lm.py tools/fairseq/fairseq/hub_utils.py tools/fairseq/fairseq/models/nat/nat_crf_transformer.py tools/fairseq/fairseq/modules/lightconv_layer/__init__.py tools/fairseq/fairseq/models/bart/model.py tools/fairseq/fairseq/models/nat/iterative_nonautoregressive_transformer.py tools/fairseq/fairseq/modules/fp32_group_norm.py tools/fairseq/examples/simultaneous_translation/eval/scorers/scorer.py tools/fairseq/fairseq/tasks/sentence_prediction.py tools/fairseq/scripts/read_binarized.py tools/fairseq/fairseq/data/multi_corpus_sampled_dataset.py tools/fairseq/tests/test_file_io.py tools/fairseq/examples/speech_recognition/models/__init__.py tools/fairseq/fairseq/model_parallel/megatron_trainer.py tools/fairseq/fairseq/modules/kmeans_vector_quantizer.py tools/fairseq/fairseq/benchmark/dummy_masked_lm.py tools/fairseq/fairseq/modules/transformer_sentence_encoder.py tools/fairseq/fairseq/bleu.py tools/fairseq/fairseq/models/roberta/model_camembert.py tools/fairseq/tests/utils.py tools/e2e-metrics/pycocoevalcap/__init__.py tools/fairseq/fairseq/data/multilingual/sampled_multi_dataset.py tools/fairseq/fairseq/optim/bmuf.py tools/fairseq/fairseq/tasks/multilingual_masked_lm.py tools/fairseq/fairseq/data/numel_dataset.py tools/fairseq/fairseq/modules/adaptive_input.py tools/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py tools/fairseq/fairseq/data/round_robin_zip_datasets.py tools/fairseq/scripts/build_sym_alignment.py tools/fairseq/fairseq/models/fairseq_incremental_decoder.py 
tools/fairseq/fairseq/models/distributed_fairseq_model.py tools/fairseq/fairseq/models/lightconv_lm.py tools/fairseq/fairseq/tasks/translation.py tools/fairseq/fairseq/modules/conv_tbc.py tools/e2e-metrics/pycocotools/coco.py tools/fairseq/examples/simultaneous_translation/eval/agents/simul_trans_text_agent.py tools/fairseq/fairseq/model_parallel/models/transformer.py tools/fairseq/fairseq/tasks/translation_lev.py tools/fairseq/examples/simultaneous_translation/utils/functions.py tools/fairseq/fairseq/models/masked_lm.py tools/fairseq/examples/simultaneous_translation/eval/agents/agent.py tools/fairseq/fairseq/data/encoders/byte_bpe.py tools/fairseq/fairseq/data/replace_dataset.py tools/fairseq/fairseq/data/__init__.py tools/fairseq/fairseq/model_parallel/modules/transformer_sentence_encoder.py tools/fairseq/fairseq/sequence_scorer.py tools/fairseq/fairseq/data/encoders/subword_nmt_bpe.py tools/e2e-metrics/pycocoevalcap/meteor/__init__.py tools/fairseq/examples/speech_recognition/infer.py tools/fairseq/examples/noisychannel/rerank_tune.py tools/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py tools/fairseq/examples/backtranslation/deduplicate_lines.py tools/fairseq/fairseq/models/roberta/alignment_utils.py tools/fairseq/examples/speech_recognition/criterions/ASG_loss.py tools/fairseq/fairseq/models/fconv_self_att.py tools/fairseq/fairseq/trainer.py tools/fairseq/fairseq/data/raw_label_dataset.py tools/fairseq/fairseq/modules/adaptive_softmax.py tools/fairseq/fairseq/models/model_utils.py tools/fairseq/fairseq/model_parallel/modules/transformer_layer.py tools/fairseq/examples/speech_recognition/datasets/asr_prep_json.py tools/fairseq/fairseq/modules/layer_drop.py tools/fairseq/fairseq/data/language_pair_dataset.py tools/fairseq/fairseq/tasks/masked_lm.py tools/fairseq/fairseq/data/legacy/masked_lm_dataset.py tools/fairseq/scripts/count_docs.py tools/fairseq/examples/speech_recognition/criterions/cross_entropy_acc.py tools/e2e-metrics/pycocoevalcap/rouge/rouge.py tools/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_layer.py tools/fairseq/fairseq/benchmark/dummy_lm.py tools/fairseq/fairseq/data/subsample_dataset.py tools/fairseq/fairseq/tasks/audio_pretraining.py tools/fairseq/fairseq/models/fairseq_decoder.py tools/e2e-metrics/pycocoevalcap/cider/cider_scorer.py tools/fairseq/fairseq/benchmark/dummy_mt.py tools/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py tools/fairseq/fairseq/data/shorten_dataset.py tools/fairseq/fairseq/modules/sinusoidal_positional_embedding.py tools/fairseq/fairseq/modules/dynamicconv_layer/setup.py tools/fairseq/fairseq/models/roberta/model.py tools/fairseq/fairseq/tasks/language_modeling.py tools/fairseq/fairseq/file_utils.py tools/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py tools/fairseq/fairseq/data/prepend_dataset.py tools/fairseq/examples/speech_recognition/models/vggtransformer.py tools/fairseq/fairseq/data/nested_dictionary_dataset.py tools/fairseq/examples/byte_level_bpe/get_bitext.py tools/fairseq/examples/simultaneous_translation/modules/monotonic_multihead_attention.py tools/fairseq/fairseq/data/roll_dataset.py tools/fairseq/fairseq/modules/quantization/scalar/__init__.py tools/fairseq/fairseq/modules/layer_norm.py tools/fairseq/fairseq/modules/learned_positional_embedding.py tools/fairseq/fairseq/models/nat/fairseq_nat_model.py tools/fairseq/fairseq_cli/preprocess.py tools/fairseq/fairseq/data/encoders/nltk_tokenizer.py tools/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py 
tools/fairseq/tests/test_sequence_generator.py tools/fairseq/tests/test_lstm_jitable.py tools/fairseq/docs/conf.py tools/fairseq/fairseq/tasks/semisupervised_translation.py tools/fairseq/fairseq/data/transform_eos_dataset.py tools/fairseq/examples/speech_recognition/data/collaters.py tools/fairseq/fairseq/modules/grad_multiply.py tools/fairseq/tests/test_memory_efficient_fp16.py tools/fairseq/fairseq/data/colorize_dataset.py tools/fairseq/fairseq/models/lightconv.py tools/fairseq/tests/test_bmuf.py tools/fairseq/fairseq/modules/multihead_attention.py tools/fairseq/tests/test_sparse_multihead_attention.py tools/fairseq/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py tools/fairseq/fairseq/models/nat/insertion_transformer.py tools/fairseq/fairseq/tasks/multilingual_translation.py tools/fairseq/fairseq/tasks/translation_multi_simple_epoch.py tools/fairseq/tests/speech_recognition/test_vggtransformer.py tools/fairseq/fairseq/data/list_dataset.py tools/fairseq/fairseq/optim/adafactor.py tools/fairseq/tests/gpu/test_binaries_gpu.py tools/fairseq/fairseq/modules/linearized_convolution.py tools/fairseq/examples/translation_moe/score.py tools/fairseq/tests/test_utils.py tools/fairseq/examples/roberta/commonsense_qa/commonsense_qa_task.py tools/fairseq/tests/test_convtbc.py tools/fairseq/fairseq/modules/transformer_layer.py tools/fairseq/fairseq/models/__init__.py tools/fairseq/fairseq/modules/gelu.py tools/fairseq/fairseq/models/transformer.py tools/fairseq/tests/test_resampling_dataset.py tools/fairseq/fairseq/models/lstm_lm.py tools/fairseq/fairseq/models/roberta/model_xlmr.py tools/fairseq/examples/speech_recognition/data/__init__.py tools/fairseq/fairseq/data/prepend_token_dataset.py tools/fairseq/examples/speech_recognition/data/asr_dataset.py tools/fairseq/tests/test_noising.py tools/fairseq/fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py tools/fairseq/fairseq/tasks/fairseq_task.py tools/fairseq/examples/roberta/multiprocessing_bpe_encoder.py tools/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py tools/fairseq/fairseq/data/encoders/moses_tokenizer.py tools/fairseq/fairseq/data/noising.py tools/fairseq/examples/simultaneous_translation/eval/scorers/text_scorer.py tools/fairseq/fairseq/models/multilingual_transformer.py tools/fairseq/fairseq/model_parallel/models/roberta/model.py tools/fairseq/fairseq/benchmark/__init__.py tools/fairseq/tests/test_multihead_attention.py tools/fairseq/fairseq/data/strip_token_dataset.py tools/fairseq/fairseq/data/encoders/utils.py tools/fairseq/fairseq_cli/interactive.py tools/fairseq/fairseq/models/lstm.py tools/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py tools/fairseq/fairseq/data/encoders/fastbpe.py tools/fairseq/examples/paraphraser/paraphrase.py tools/fairseq/examples/simultaneous_translation/models/__init__.py tools/fairseq/fairseq/data/offset_tokens_dataset.py tools/fairseq/fairseq/modules/quantization/scalar/modules/qlinear.py tools/fairseq/scripts/split_train_valid_docs.py tools/fairseq/fairseq/logging/meters.py tools/fairseq/fairseq/quantization_utils.py tools/e2e-metrics/pycocoevalcap/eval.py tools/fairseq/tests/test_character_token_embedder.py tools/fairseq/fairseq/optim/lr_scheduler/__init__.py tools/fairseq/fairseq/models/nat/levenshtein_utils.py tools/fairseq/tests/test_iterators.py tools/fairseq/fairseq/modules/quantization/scalar/ops.py tools/fairseq/tests/test_sequence_scorer.py tools/fairseq/examples/simultaneous_translation/modules/__init__.py tools/fairseq/tests/speech_recognition/test_collaters.py 
tools/fairseq/fairseq/modules/dynamicconv_layer/__init__.py tools/fairseq/fairseq/tasks/legacy_masked_lm.py tools/e2e-metrics/pycocoevalcap/rouge/__init__.py tools/fairseq/fairseq/binarizer.py tools/fairseq/fairseq/optim/sgd.py tools/fairseq/fairseq/criterions/sentence_prediction.py tools/fairseq/fairseq/modules/fairseq_dropout.py tools/fairseq/fairseq/models/bart/hub_interface.py tools/fairseq/fairseq/distributed_utils.py tools/fairseq/fairseq/data/legacy/__init__.py tools/fairseq/examples/speech_recognition/tasks/speech_recognition.py tools/fairseq/scripts/spm_decode.py tools/fairseq/fairseq/model_parallel/__init__.py tools/fairseq/scripts/compare_namespaces.py tools/fairseq/fairseq/tasks/denoising.py tools/fairseq/fairseq/iterative_refinement_generator.py tools/fairseq/fairseq/modules/quantization/scalar/utils.py tools/fairseq/fairseq/registry.py tools/fairseq/fairseq/data/num_samples_dataset.py tools/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py tools/fairseq/train.py tools/fairseq/tests/speech_recognition/test_data_utils.py tools/fairseq/fairseq/criterions/adaptive_loss.py tools/fairseq/fairseq/modules/quantization/pq/modules/qemb.py tools/fairseq/tests/test_train.py tools/fairseq/fairseq/data/encoders/byte_utils.py tools/fairseq/fairseq/search.py tools/fairseq/fairseq/models/nat/nonautoregressive_transformer.py tools/fairseq/fairseq/criterions/nat_loss.py tools/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py tools/fairseq/fairseq/optim/fused_adam.py tools/fairseq/examples/simultaneous_translation/criterions/label_smoothed_cross_entropy_latency_augmented.py tools/fairseq/fairseq/models/fairseq_model.py tools/e2e-metrics/pycocoevalcap/bleu/bleu_scorer.py tools/fairseq/fairseq/models/fconv_lm.py tools/fairseq/fairseq/data/encoders/gpt2_bpe.py tools/fairseq/examples/__init__.py tools/fairseq/fairseq/data/iterators.py tools/fairseq/examples/roberta/wsc/wsc_task.py tools/fairseq/fairseq/data/pad_dataset.py tools/fairseq/fairseq/modules/quantization/pq/modules/qconv.py tools/fairseq/fairseq/data/lm_context_window_dataset.py tools/fairseq/fairseq/data/multilingual/__init__.py tools/fairseq/examples/simultaneous_translation/eval/eval_latency.py tools/fairseq/fairseq/model_parallel/models/transformer_lm.py tools/fairseq/fairseq/data/concat_dataset.py tools/fairseq/examples/speech_recognition/criterions/__init__.py tools/fairseq/examples/simultaneous_translation/eval/evaluate.py tools/fairseq/fairseq/modules/quant_noise.py tools/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py tools/fairseq/fairseq/modules/sparse_transformer_sentence_encoder.py tools/fairseq/examples/speech_recognition/__init__.py tools/fairseq/fairseq_cli/score.py tools/fairseq/fairseq/data/legacy/masked_lm_dictionary.py tools/e2e-metrics/pycocoevalcap/bleu/__init__.py tools/fairseq/fairseq/modules/quantization/pq/modules/__init__.py tools/fairseq/examples/simultaneous_translation/utils/__init__.py tools/fairseq/fairseq/criterions/__init__.py tools/fairseq/fairseq/data/fairseq_dataset.py tools/fairseq/fairseq/data/mask_tokens_dataset.py tools/fairseq/scripts/spm_train.py tools/fairseq/fairseq/modules/quantization/scalar/modules/qconv.py tools/fairseq/examples/simultaneous_translation/eval/server.py tools/fairseq/fairseq/modules/gumbel_vector_quantizer.py tools/fairseq/fairseq/models/transformer_from_pretrained_xlm.py tools/fairseq/fairseq/models/roberta/__init__.py tools/fairseq/examples/simultaneous_translation/eval/scorers/__init__.py 
tools/fairseq/fairseq/data/bucket_pad_length_dataset.py tools/fairseq/examples/simultaneous_translation/criterions/__init__.py tools/fairseq/examples/translation_moe/src/translation_moe.py tools/fairseq/fairseq/optim/adamax.py tools/fairseq/fairseq/models/huggingface/__init__.py tools/fairseq/fairseq/data/encoders/bytes.py tools/fairseq/fairseq/models/wav2vec.py tools/fairseq/fairseq/logging/progress_bar.py tools/fairseq/fairseq/tasks/multilingual_denoising.py tools/fairseq/examples/translation_moe/src/__init__.py tools/fairseq/examples/wav2vec/wav2vec_manifest.py tools/fairseq/fairseq/options.py tools/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py tools/fairseq/examples/noisychannel/rerank_options.py tools/fairseq/examples/simultaneous_translation/eval/__init__.py tools/fairseq/fairseq/criterions/composite_loss.py tools/fairseq/examples/wav2vec/wav2vec_featurize.py tools/fairseq/fairseq/logging/metrics.py tools/fairseq/tests/test_inference_dropout.py tools/fairseq/tests/test_metrics.py tools/fairseq/fairseq/data/encoders/__init__.py tools/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py tools/fairseq/fairseq/data/indexed_dataset.py tools/fairseq/fairseq/modules/sparse_multihead_attention.py tools/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py tools/fairseq/examples/simultaneous_translation/eval/agents/simul_trans_agent.py tools/fairseq/examples/simultaneous_translation/models/transformer_monotonic_attention.py tools/fairseq/fairseq/data/dictionary.py tools/fairseq/examples/unsupervised_quality_estimation/meteor.py tools/fairseq/fairseq/data/append_token_dataset.py tools/fairseq/fairseq/modules/unfold.py tools/fairseq/fairseq/modules/quantization/scalar/modules/qact.py tools/fairseq/examples/noisychannel/__init__.py tools/fairseq/examples/speech_recognition/data/data_utils.py tools/e2e-metrics/pycocoevalcap/tokenizer/__init__.py tools/fairseq/fairseq/modules/quantization/pq/em.py tools/fairseq/fairseq/data/monolingual_dataset.py tools/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py tools/fairseq/examples/noisychannel/rerank_score_bw.py tools/fairseq/fairseq/__init__.py tools/fairseq/fairseq/modules/lightconv_layer/setup.py tools/fairseq/fairseq/data/encoders/hf_byte_bpe.py tools/fairseq/fairseq/models/nat/cmlm_transformer.py tools/fairseq/tests/speech_recognition/asr_test_base.py tools/fairseq/fairseq/data/legacy/block_pair_dataset.py tools/fairseq/fairseq/tokenizer.py tools/fairseq/examples/roberta/wsc/__init__.py tools/fairseq/fairseq/modules/vggblock.py tools/fairseq/fairseq/optim/adagrad.py tools/fairseq/fairseq/optim/__init__.py tools/fairseq/fairseq/models/fairseq_encoder.py tools/fairseq/fairseq/optim/adadelta.py tools/fairseq/fairseq/modules/cross_entropy.py tools/fairseq/fairseq/modules/lightweight_convolution.py tools/fairseq/fairseq/models/roberta/hub_interface.py tools/fairseq/fairseq/data/base_wrapper_dataset.py tools/fairseq/fairseq/checkpoint_utils.py tools/fairseq/fairseq/optim/adam.py tools/fairseq/tests/test_backtranslation_dataset.py tools/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py tools/fairseq/fairseq/data/lru_cache_dataset.py tools/fairseq/tests/test_concat_dataset.py tools/fairseq/fairseq/nan_detector.py tools/fairseq/examples/roberta/wsc/wsc_criterion.py tools/fairseq/fairseq/data/encoders/space_tokenizer.py tools/fairseq/fairseq/optim/fairseq_optimizer.py tools/fairseq/fairseq/modules/beamable_mm.py tools/fairseq/fairseq/data/audio/raw_audio_dataset.py 
tools/fairseq/fairseq/file_io.py tools/fairseq/fairseq/models/bart/__init__.py tools/fairseq/fairseq/data/multilingual/multilingual_data_manager.py tools/fairseq/examples/speech_recognition/data/replabels.py tools/fairseq/fairseq/models/fconv.py src/eval_metrics.py tools/fairseq/examples/noisychannel/rerank_score_lm.py tools/fairseq/fairseq/criterions/binary_cross_entropy.py tools/fairseq/fairseq/model_parallel/modules/transformer_sentence_encoder_layer.py tools/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py tools/fairseq/examples/translation_moe/src/logsumexp_moe.py tools/fairseq/fairseq/data/encoders/sentencepiece_bpe.py tools/fairseq/examples/roberta/commonsense_qa/__init__.py tools/fairseq/fairseq/modules/quantization/pq/pq.py tools/e2e-metrics/metrics/pymteval.py tools/fairseq/fairseq/models/transformer_lm.py tools/e2e-metrics/pycocoevalcap/bleu/bleu.py tools/fairseq/tests/test_export.py tools/fairseq/examples/wav2vec/vq-wav2vec_featurize.py tools/fairseq/fairseq/data/encoders/characters.py tools/fairseq/fairseq/tasks/translation_from_pretrained_bart.py tools/fairseq/examples/roberta/preprocess_RACE.py tools/fairseq/tests/test_label_smoothing.py tools/fairseq/fairseq/data/data_utils.py tools/fairseq/fairseq/optim/fused_lamb.py tools/fairseq/fairseq/models/nat/levenshtein_transformer.py tools/fairseq/fairseq/data/multi_corpus_dataset.py tools/fairseq/scripts/shard_docs.py tools/fairseq/fairseq/tasks/cross_lingual_lm.py tools/fairseq/fairseq_cli/generate.py tools/fairseq/fairseq/data/concat_sentences_dataset.py tools/fairseq/fairseq/models/transformer_align.py tools/fairseq/hubconf.py tools/fairseq/examples/megatron_11b/detok.py tools/fairseq/examples/roberta/wsc/wsc_utils.py tools/fairseq/tests/test_token_block_dataset.py tools/fastBPE/setup.py src/rebalanced_encoding.py tools/fairseq/examples/translation_moe/src/mean_pool_gating_network.py tools/fairseq/examples/noisychannel/rerank_utils.py tools/fairseq/fairseq/data/resampling_dataset.py tools/fairseq/fairseq/data/token_block_dataset.py tools/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py tools/e2e-metrics/measure_scores.py tools/fairseq/fairseq/data/plasma_utils.py tools/fairseq/fairseq/data/sort_dataset.py tools/fairseq/examples/noisychannel/rerank.py tools/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py tools/fairseq/fairseq/legacy_distributed_data_parallel.py tools/fairseq/fairseq/data/backtranslation_dataset.py tools/fairseq/fairseq/modules/quantization/quantization_options.py tools/fairseq/fairseq/modules/quantization/pq/utils.py tools/fairseq/fairseq/incremental_decoding_utils.py tools/fairseq/fairseq/data/denoising_dataset.py tools/fairseq/fairseq/model_parallel/criterions/__init__.py tools/fairseq/fairseq/modules/scalar_bias.py tools/fairseq/fairseq/optim/nag.py tools/fairseq/fairseq/model_parallel/models/__init__.py tools/fairseq/fairseq/criterions/sentence_ranking.py tools/fairseq/fairseq/benchmark/dummy_model.py tools/e2e-metrics/pycocoevalcap/cider/__init__.py tools/fairseq/examples/simultaneous_translation/eval/client.py tools/fairseq/fairseq/criterions/fairseq_criterion.py tools/fairseq/fairseq/modules/dynamic_convolution.py tools/fairseq/fairseq/pdb.py tools/fairseq/fairseq/criterions/masked_lm.py tools/fairseq/fairseq/model_parallel/models/roberta/__init__.py tools/fairseq/fairseq/tasks/__init__.py tools/fairseq/examples/speech_recognition/criterions/CTC_loss.py tools/fairseq/fairseq/sequence_generator.py src/fs_eval.py tools/fairseq/tests/test_binaries.py 
tools/fairseq/fairseq/modules/dynamic_crf_layer.py tools/fairseq/fairseq/criterions/legacy_masked_lm.py tools/e2e-metrics/pycocoevalcap/meteor/meteor.py tools/fairseq/tests/test_dictionary.py tools/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py tools/fairseq/tests/test_average_checkpoints.py tools/fairseq/fairseq/data/encoders/hf_bert_bpe.py tools/fairseq/examples/byte_level_bpe/gru_transformer.py tools/fairseq/fairseq/modules/downsampled_multihead_attention.py tools/fairseq/setup.py tools/fairseq/examples/speech_recognition/tasks/__init__.py tools/fairseq/fairseq/criterions/cross_entropy.py tools/fairseq/examples/speech_recognition/utils/wer_utils.py tools/fairseq/fairseq/models/nat/__init__.py tools/fairseq/fairseq/modules/positional_embedding.py tools/fairseq/fairseq/data/id_dataset.py tools/fairseq/fairseq/data/transform_eos_lang_pair_dataset.py tools/fairseq/fairseq/utils.py get_score get_repc_v1 longestRepeatedSubstring get_repc get_rep get_seq_rep_n get_repd main _add multiple_replace rebalance_fastbpe msum _make_stats get_high_inflow run_pymteval create_coco_refs create_coco_sys evaluate run_coco_eval run_mteval read_and_group_tsv sent_level_scores read_tsv write_tsv load_data create_mteval_file read_lines read_and_check_tsv NISTScore BLEUScore NGramScore COCOEvalCap Bleu precook BleuScorer cook_test cook_refs Cider precook CiderScorer cook_test cook_refs Meteor my_lcs Rouge PTBTokenizer COCO NumpyExtension main get_hashes_and_lines main _convert_to_bchar _apply_bpe _convert_xml pretokenize _get_chars _concat_files preprocess_iwslt17 _get_bpe _convert_train main _apply_bbpe _get_bytes gru_transformer_big GRUTransformerEncoder GRUTransformerModel gru_transformer_base_architecture main load_score_files score_target_hypo cli_main rerank match_target_hypo cli_main gen_and_reprocess_nbest add_reranking_args get_tuning_parser get_reranking_parser add_tuning_args score_bw cli_main score_lm cli_main random_search cli_main get_score get_prefix_no_bpe get_prefix_from_len rescore_file_name get_num_bpe_tokens_from_len LMOutput lm_scoring make_right_to_left get_score_from_pos remove_bpe reprocess_nbest reprocess get_directories parse_bleu_scoring get_prefix BitextOutputFromGen remove_bpe_dict parse_lm calc_length_from_frac get_full_from_prefix BitextOutput write_reprocessed main main MultiprocessingEncoder main get_examples InputExample CommonsenseQATask WSCCriterion WinograndeCriterion WSCTask WinograndeTask find_span find_token get_spacy_nlp jsonl_iterator convert_sentence_to_json filter_noun_chunks winogrande_jsonl_iterator extended_noun_chunks get_detokenizer LatencyAugmentedLabelSmoothedCrossEntropyCriterion SimulSTLocalEvaluationService SimulSTEvaluationService get_args LatencyScorer start_server SourceHandler ScorerHandler add_args EvalSessionHandler HypothesisHandler ResultHandler Agent SimulTransAgent SimulTransTextAgent SubwordSplitter NoneWordSplitter BPEWordSplitter SentencePieceModelWordSplitter SimulScorer SimulTextScorer transformer_monotonic_iwslt_de_en TransformerMonotonicModel TransformerUnidirectionalModel transformer_unidirectional_iwslt_de_en transformer_monotonic_vaswani_wmt_en_de_big base_monotonic_rchitecture transformer_monotonic_vaswani_wmt_en_fr_big TransformerMonotonicEncoder TransformerMonotonicDecoder MonotonicAttention MonotonicMultiheadAttentionWaitk MonotonicMultiheadAttentionHard MonotonicMultiheadAttentionInfiniteLookback TransformerMonotonicEncoderLayer TransformerMonotonicDecoderLayer exclusive_cumprod lengths_to_mask moving_sum safe_cumprod 
LatencyInference LatencyMetric LatencyMetricVariance DifferentiableAverageLagging VarianceDelay AverageProportion LatencyTraining AverageLagging process_predictions optimize_models cli_main load_models_and_criterions get_dataset_itr prepare_result_files add_asr_eval_argument main check_args W2lKenLMDecoder W2lDecoder W2lViterbiDecoder ASGCriterion CrossEntropyWithAccCriterion compute_ctc_uer CTCCriterion arr_to_toks AsrDataset Seq2SeqCollater apply_mv_norm lengths_to_encoder_padding_mask calc_mean_invstddev encoder_padding_mask_to_lengths unpack_replabels replabel_symbol pack_replabels main process_sample vggtransformer_base VGGTransformerEncoder prepare_transformer_encoder_params vggtransformer_enc_1 LinearizedConv1d Embedding LayerNorm VGGTransformerEncoderOnly TransformerDecoder base_architecture prepare_transformer_decoder_params vggtransformer_1 vggtransformer_2 VGGTransformerModel base_architecture_enconly VGGTransformerEncoderModel Linear w2l_conv_glu_enc W2lConvGluEncoder W2lConvGluEncoderModel SpeechRecognitionTask get_asr_dataset_from_json offset_to_col Code Token trimWhitespace EditDistance merge_counts AlignmentResult calc_wer_stats coordinate_to_offset calc_wer str2toks get_wer_alignment_codes offset_to_row WERTransformer corpus_bleu intra_ref load_sys merge dictolist multi_ref sentence_bleu main pairwise load_ref LogSumExpMoE MeanPoolGatingNetwork TranslationMoETask main read_translations generate_input run_meteor read_output main main _normalize_spaces FilesDataset DatasetWriter ArgTypes PretrainedWav2VecModel EmbeddingDatasetWriter H5Writer Prediction read_audio EmbeddingWriterConfig main get_parser Binarizer safe_readline BleuStat SacrebleuScorer Scorer load_checkpoint_to_cpu torch_persistent_save load_checkpoint verify_checkpoint_directory load_pretrained_component_from_model load_model_ensemble_and_task prune_state_dict save_checkpoint load_model_ensemble save_state _upgrade_state_dict checkpoint_paths all_reduce_dict call_main distributed_main all_gather_list get_default_group is_master get_world_size all_reduce get_rank infer_init_method distributed_init PathManager cached_path s3_etag http_get s3_request s3_get read_set_from_file get_from_cache filename_to_url load_archive_file url_to_filename split_s3_path request_wrap_timeout get_file_extension from_pretrained GeneratorHubInterface TokenizerHubInterface BPEHubInterface FairseqIncrementalState with_incremental_state IterativeRefinementGenerator LegacyDistributedDataParallel NanDetector csv_str_list parse_args_and_arch eval_str_list get_validation_parser add_distributed_training_args add_checkpoint_args get_preprocessing_parser add_generation_args get_parser eval_bool add_eval_lm_args add_preprocess_args add_model_args add_common_eval_args get_eval_lm_parser add_dataset_args add_interactive_args add_optimization_args get_interactive_generation_parser get_generation_parser eval_str_dict get_training_parser MultiprocessingPdb set_trace Quantizer quantize_model_scalar set_defaults setup_registry LengthConstrainedBeamSearch DiverseSiblingsSearch BeamSearch Sampling Search DiverseBeamSearch EnsembleModel SequenceGenerator EnsembleModelWithAlignment SequenceGeneratorWithAlignment BeamContainer SequenceScorer tokenize_line _get_module_by_path _catalog_shared_params _set_module_by_path Trainer extract_hard_alignment clip_grad_norm_ get_token_to_word_mapping import_user_module move_to_cpu deprecation_warning print_embed_overlap set_incremental_state has_parameters split_paths load_ensemble_for_inference _match_types 
parse_embedding parse_alignment move_to_cuda post_process_prediction get_available_activation_fns load_embedding get_perplexity new_arange apply_to_sample strip_pad buffered_arange log_softmax resolve_max_positions load_align_dict get_tpu_device eval softmax get_activation_fn item convert_padding_direction make_positions set_torch_seed multi_tensor_total_norm get_incremental_state with_torch_seed fill_with_neg_inf replace_unk CudaEnvironment DummyDataset DummyLMTask DummyMaskedLMTask DummyDataset base_architecture DummyEncoder DummyModel DummyMTTask DummyDataset AdaptiveLoss BinaryCrossEntropyCriterion CompositeLoss CrossEntropyCriterion LegacyFairseqCriterion FairseqCriterion label_smoothed_nll_loss LabelSmoothedCrossEntropyCriterion LabelSmoothedCrossEntropyCriterionWithAlignment compute_cross_entropy_loss LegacyMaskedLmLoss MaskedLmLoss LabelSmoothedDualImitationCriterion SentencePredictionCriterion SentenceRankingCriterion ShardedIterator CountingIterator StreamingEpochBatchIterator BackgroundConsumer BufferedIterator EpochBatchIterating EpochBatchIterator _chunk_iterator GroupedIterator AppendTokenDataset backtranslate_samples BacktranslationDataset BaseWrapperDataset BucketPadLengthDataset ColorizeDataset ConcatDataset ConcatSentencesDataset collect_filtered batch_by_size process_bpe_symbol numpy_seed load_indexed_dataset collate_tokens infer_language_pair _filter_by_size_dynamic filter_by_size DenoisingDataset collate TruncatedDictionary Dictionary FairseqIterableDataset FairseqDataset EpochListening IdDataset infer_dataset_impl make_builder IndexedCachedDataset index_file_path IndexedDatasetBuilder __best_fitting_dtype code dataset_exists IndexedDataset IndexedRawTextDataset make_dataset get_available_dataset_impl read_longs MMapIndexedDatasetBuilder write_longs data_file_path MMapIndexedDataset _warmup_mmap_file LanguagePairDataset collate ListDataset LMContextWindowDataset LRUCacheDataset MaskTokensDataset MonolingualDataset collate MultiCorpusDataset uniform_sampler MultiCorpusSampledDataset _unflatten _flatten NestedDictionaryDataset UnsupervisedMTNoising NoisingDataset WordDropout WordShuffle WordNoising NumelDataset NumSamplesDataset OffsetTokensDataset PadDataset LeftPadDataset RightPadDataset PlasmaArray PrependDataset PrependTokenDataset RawLabelDataset ReplaceDataset ResamplingDataset RollDataset RoundRobinZipDatasets TruncateDataset RandomCropDataset maybe_shorten_dataset SortDataset StripTokenDataset SubsampleDataset TokenBlockDataset TransformEosDataset TransformEosLangPairDataset FileAudioDataset RawAudioDataset Bytes ByteBPE byte_decode smart_byte_decode byte_encode Characters fastBPE GPT2BPE bytes_to_unicode get_pairs get_encoder Encoder BertBPE HuggingFaceByteLevelBPE MosesTokenizer NLTKTokenizer SentencepieceBPE SpaceTokenizer SubwordNMTBPE get_whole_word_mask BlockPairDataset MaskedLMDataset MaskedLMDictionary BertDictionary _lang_token MultilingualDatasetManager _lang_token_index _lang_id load_sampling_weights default_virtual_size_func get_time_gap SampledMultiDataset CollateFormat SampledMultiEpochDataset uniform make_ratio_sampling make_temperature_sampling temperature_sampling SamplingMethod MetersDict type_as AverageMeter Meter StopwatchMeter TimeMeter safe_round get_meter state_dict aggregate reset_meters get_smoothed_value log_scalar log_start_time get_active_aggregators reset log_custom get_meters reset_meter load_state_dict get_smoothed_values log_derived log_stop_time log_speed TqdmProgressBar BaseProgressBar progress_bar _close_writers 
TensorboardProgressBarWrapper rename_logger format_stat NoopProgressBar SimpleProgressBar build_progress_bar JsonProgressBar CompositeEncoder DistributedFairseqModel FairseqDecoder FairseqEncoder FairseqIncrementalDecoder BaseFairseqModel FairseqEncoderDecoderModel FairseqMultiModel FairseqLanguageModel FairseqModel FairseqEncoderModel PositionalEmbedding ConvTBC LinearizedConv1d Embedding fconv_iwslt_de_en fconv_wmt_en_ro FConvModel base_architecture AttentionLayer fconv_wmt_en_de FConvEncoder extend_conv_spec fconv_wmt_en_fr FConvDecoder Linear FConvLanguageModel base_lm_architecture fconv_lm_dauphin_wikitext103 fconv_lm_dauphin_gbw FConvModelSelfAtt PositionalEmbedding ConvTBC LinearizedConv1d Embedding SelfAttention fconv_self_att_wp base_architecture FConvEncoder FConvDecoder Linear LightConvModel LightConvDecoder LightConvDecoderLayer lightconv_wmt_zh_en_big Embedding LightConvEncoderLayer base_architecture LightConvEncoder lightconv_wmt_en_de_big lightconv_wmt_en_fr_big lightconv_iwslt_de_en lightconv_wmt_en_de Linear LightConvLanguageModel lightconv_lm_gbw base_lm_architecture lstm_wiseman_iwslt_de_en LSTMModel Embedding LSTM lstm_luong_wmt_en_de LSTMEncoder AttentionLayer LSTMCell base_architecture LSTMDecoder Linear base_architecture LSTMLanguageModel MaskedLMEncoder bert_base_architecture base_architecture bert_large_architecture xlm_architecture MaskedLMModel coalesce script_skip_tensor_list expand_2d_or_3d_tensor script_skip_tensor fill_tensors base_multilingual_architecture multilingual_transformer_iwslt_de_en MultilingualTransformerModel transformer_vaswani_wmt_en_de_big transformer_wmt_en_de_big Embedding transformer_wmt_en_de_big_t2t transformer_wmt_en_de TransformerDecoder TransformerModel base_architecture transformer_vaswani_wmt_en_fr_big transformer_iwslt_de_en TransformerEncoder Linear transformer_wmt_en_de_big_align transformer_align TransformerAlignModel TransformerFromPretrainedXLMModel base_architecture upgrade_state_dict_with_xlm_weights TransformerDecoderFromPretrainedXLM TransformerEncoderFromPretrainedXLM transformer_lm_big transformer_lm_gpt2_big transformer_lm_baevski_gbw base_lm_architecture transformer_lm_gpt2_small TransformerLanguageModel transformer_lm_baevski_wiki103 transformer_lm_gpt transformer_lm_gpt2_medium ConvFeatureExtractionModel Wav2VecPredictionsModel ZeroPad1d TransposeLast ConvAggegator norm_block Wav2VecModel base_wav2vec_architecture register_model_architecture register_model build_model BARTHubInterface bart_base_architecture bart_large_architecture mbart_base_architecture mbart_base_wmt20_architecture BARTModel mbart_large_architecture BARTClassificationHead HuggingFaceGPT2LanguageModel HuggingFaceGPT2Decoder hf_gpt2_medium hf_gpt2_xl default_architecture hf_gpt2_large _skeptical_unmasking cmlm_base_architecture cmlm_wmt_en_de CMLMNATransformerModel FairseqNATModel ensemble_decoder FairseqNATEncoder ensemble_encoder FairseqNATDecoder insertion_base_architecture InsertionTransformerDecoder InsertionTransformerModel NegativeDistanceScore _get_ins_targets _apply_ins_words IterNATransformerModel gumbel_noise _sequential_poisoning iter_nat_wmt_en_de inat_base_architecture LevenshteinTransformerDecoder levenshtein_transformer_vaswani_wmt_en_de_big levenshtein_transformer_wmt_en_de levenshtein_base_architecture LevenshteinTransformerModel levenshtein_transformer_wmt_en_de_big_t2t _fill _get_del_targets _apply_ins_masks _skip_encoder_out _get_ins_targets _apply_ins_words _apply_del_words _skip load_libnat nacrf_base_architecture 
NACRFTransformerModel EnsembleLevT BasicEnsembleModel _EnsembleModelEncoder _mean_pooling _argmax NATransformerModel base_architecture NATransformerDecoder _uniform_assignment nonautoregressive_transformer_wmt_en_de spacy_tokenizer spacy_nlp align_bpe_to_words align_features_to_words RobertaHubInterface RobertaEncoder RobertaLMHead base_architecture roberta_large_architecture RobertaClassificationHead roberta_base_architecture xlm_architecture RobertaModel CamembertModel XLMRModel MegatronTrainer VocabParallelCrossEntropyCriterion ModelParallelTransformerModel ModelParallelTransformerEncoder ModelParallelTransformerDecoder transformer_lm_megatron transformer_lm_megatron_11b ModelParallelTransformerLanguageModel ModelParallelRobertaModel ModelParallelRobertaClassificationHead ModelParallelRobertaLMHead base_architecture roberta_large_architecture ModelParallelRobertaEncoder roberta_base_architecture ModelParallelMultiheadAttention ModelParallelTransformerDecoderLayer ModelParallelTransformerEncoderLayer ModelParallelTransformerSentenceEncoder ModelParallelTransformerSentenceEncoderLayer GumbelVectorQuantizer AdaptiveInput TiedLinear AdaptiveSoftmax TiedHeadModule BeamableMM CharacterTokenEmbedder Highway ConvTBC cross_entropy _cross_entropy_pytorch SingleHeadAttention Downsample GatedLinear DownsampledMultiHeadAttention Linear DynamicConv DynamicConv1dTBC Linear DynamicCRF logsumexp FairseqDropout Fp32GroupNorm gelu_accurate gelu GradMultiply KmeansVectorQuantizer LayerDropModuleList FusedLayerNorm LayerNorm Fp32LayerNorm LearnedPositionalEmbedding LightweightConv LightweightConv1d LightweightConv1dTBC LinearizedConvolution MultiheadAttention PositionalEmbedding quant_noise ScalarBias scalar_bias SinusoidalPositionalEmbedding SparseMultiheadAttention SparseTransformerSentenceEncoder SparseTransformerSentenceEncoderLayer Linear TransformerEncoderLayer TransformerDecoderLayer TransformerSentenceEncoder init_bert_params TransformerSentenceEncoderLayer unfold1d infer_conv_output_dim VGGBlock _pair gen_forward gen_backward dynamicconvFunction DynamicconvLayer gen_forward gen_backward LightconvLayer lightconvFunction convert_yaml_to_tuple parse_config_yaml EmptyClusterResolveError EM PQ quantize_model_ get_param attrsetter SizeTracker get_layers PQConv2d PQEmbedding PQLinear emulate_int quantize emulate_int8_channel emulate_int8_histogram emulate_int8_tensor quantize_model_ ActivationQuantizer IntConv2d IntEmbedding IntLinear Adadelta Adafactor FairseqAdafactor Adagrad Adam FairseqAdam FairseqAdamax Adamax FairseqBMUF FairseqOptimizer MemoryEfficientFP16Optimizer _FP16OptimizerMixin DynamicLossScaler FP16Optimizer _MemoryEfficientFP16OptimizerMixin FusedAdamV2 FusedAdamV1 get_fused_adam_class FairseqLAMB NAG FairseqNAG SGD CosineSchedule FairseqLRScheduler FixedSchedule InverseSquareRootSchedule PolynomialDecaySchedule ReduceLROnPlateau TriangularSchedule TriStageLRSchedule AudioPretrainingTask CrossLingualLMTask DenoisingTask FairseqTask LanguageModelingTask LegacyMaskedLMTask MaskedLMTask MultilingualDenoisingTask MultiLingualMaskedLMTask MultilingualTranslationTask _lang_token _lang_token_index _get_bt_dataset_key parse_lambda_config SemisupervisedTranslationTask _get_denoising_dataset_key SentencePredictionTask SentenceRankingTask load_langpair_dataset TranslationTask TranslationFromPretrainedBARTTask TranslationFromPretrainedXLMTask TranslationLevenshteinTask get_time_gap TranslationMultiSimpleEpochTask setup_task register_task get_task main WordStat cli_main main _main cli_main main 
buffered_read cli_main make_batches dataset_dest_file binarize_alignments binarize get_offsets dataset_dest_prefix cli_main main get_parser cli_main validate get_valid_stats should_stop_early validate_and_save cli_main main train get_training_stats tpu_data_loader main cli_main main last_n_checkpoints average_checkpoints main main main main get_parser parse_checkpoints last_n_checkpoints main every_n_checkpoints main main main main TestAverageCheckpoints ModelWithSharedParameter TestBacktranslationDataset train_masked_lm train_legacy_masked_language_model TestTranslation eval_lm_main TestOptimizers TestStories TestLanguageModeling train_roberta_head create_dummy_roberta_head_data train_masked_language_model TestMaskedLanguageModel train_language_model setup_args single_gpu_training train_step setup_model_loss_criterion Model TestBMUF TestCharacterTokenEmbedder TestConcatDataset TestConvTBC TestDictionary _test_save_and_load TestExportModels DummyTask get_dummy_task_and_parser get_dummy_dictionary TestFileIO TestInferenceDropout TestIterators TestLabelSmoothing get_dummy_dictionary DummyTask get_dummy_task_and_parser TestJitLSTMModel TestMemoryEfficientFP16 TestMetrics TestMultiheadAttention TestMultiCorpusSampledDataset TestDataNoising TestReproducibility TestResamplingDataset TestJitEnsemble TestJitSequenceGeneratorBase TestExportSearch TestSequenceGeneratorBase TestDiverseSiblingsSearch TestJitSequeneceGenerator TestDiverseBeamSearch TestTopPSamplingSearch DummyTask get_dummy_task_and_parser TestSequeneceGenerator get_dummy_dictionary TestSequenceScorer TestSparseMultiheadAttention TestTokenBlockDataset mock_dict mock_trainer get_trainer_and_epoch_itr TestLoadCheckpoint TestUtils create_dummy_data TestDataset TestEncoder TestAdditionalInputModel sequence_generator_setup train_translation_model dummy_dataloader TestModel preprocess_translation_data TestAdditionalInputEncoder preprocess_lm_data generate_main dummy_dictionary TestReshapingModel TestReshapingEncoder TestTranslationTask TestIncrementalDecoder TestOptimizersGPU TestQuantization TestTranslationGPU _quantize_language_model TestFairseqEncoderBase TestFairseqEncoderModelBase check_decoder_output CrossEntropyCriterionTestBase get_dummy_encoder_output TestFairseqDecoderBase DummyEncoder DummyTask get_dummy_input get_dummy_task_and_parser TestBaseFairseqModelBase get_dummy_dictionary DummyEncoderModel check_encoder_output _current_postion_info TestFairseqEncoderDecoderModelBase TestSeq2SeqCollator CrossEntropyWithAccCriterionTest DataUtilsTest VGGTransformerModelTest_big VGGTransformerModelTest_mid VGGTransformerModelTest_base VGGTransformerEncoderTest TransformerDecoderTest sorted list tuple Counter set append max range len enumerate range enumerate append tuple range len join replace len sum range count join sum range len longestRepeatedSubstring get_repc get_rep get_seq_rep_n split sum get_repd len get_score print readlines dumps mean append lil_matrix split range enumerate len print join compile join list todense print tolist set unsqueeze msum split tensor max range len decode read join print len system write save max get_high_inflow split print match sub I append read_lines print read_tsv read_lines append read_tsv zip insert append enumerate append enumerate append enumerate print search read_and_group_tsv read_tsv I read_lines read_and_check_tsv len run_pymteval update join run_coco_eval print run_mteval dumps flush join decode print check_output mkdtemp group realpath rmtree dirname create_mteval_file float BLEUScore print 
NISTScore zip append create_coco_refs COCOEvalCap create_coco_sys evaluate print createIndex COCO loadRes run_coco_eval score extend write_tsv reset zip append enumerate log old_div defaultdict tuple split range len get items precook list min append float sum max len list items precook max range len hexdigest add_argument set ArgumentParser parse_args MosesTokenizer Args namedtuple join Train Args namedtuple ByteBPE SentencepieceBPE Args namedtuple join remove _apply_bpe _convert_to_bchar _concat_files _convert_xml pretokenize _get_chars _get_bpe _convert_train _apply_bbpe _get_bytes bbpe_vocab preprocess_iwslt17 char_vocab bpe_vocab root byte_vocab encoder_embed_dim encoder_ffn_embed_dim getattr decoder_embed_dim getattr gru_transformer_base_architecture replace files MosesDetokenizer input get_score unk eos list sorted result_string len add pad backwards append no_bpe_target range Scorer Dictionary parse_bleu_scoring keys load_score_files print encode_line get_full_from_prefix split print normalize argmax score_target_hypo data_dir_name right_to_left2 lm_name backwards2 rescore_file_name target_prefix_frac right_to_left1 lm_dict backwards1 list gen_model_name LMOutput diff_bpe all_shards num_rescore append range gen_subset nbest_list model2_name num_shards prefix_len remove_bpe sampling get_directories BitextOutputFromGen print source_prefix_frac model1_name BitextOutput list num_shards gen_model_name prefix_len data_dir_name gen_and_reprocess_nbest source_prefix_frac write_hypos score_bw score_lm sampling all_shards num_rescore get_directories target_prefix_frac range gen_subset match_target_hypo rerank get_reranking_parser parse_args_and_arch data backwards_score_dict_dir data_dir_name parse_args_and_arch target get_preprocessing_parser source rescore_file_name target_prefix_frac shard_id backwards1 str gen_model_name diff_bpe call dirname num_rescore no_bpe_target hypo parse_args gen_subset nbest_list model2_name no_bpe_hypo num_shards prefix_len sampling get_directories no_bpe_source main join score_dict_dir BitextOutputFromGen print source_lang source_prefix_frac model1_name get_generation_parser rescore_bpe_code target_lang write_reprocessed makedirs gen_and_reprocess_nbest add_reranking_args get_parser add_reranking_args get_parser add_tuning_args add_argument_group add_argument add_argument_group add_argument data_dir_name right_to_left2 parse_args_and_arch backwards2 rescore_file_name right_to_left1 target_prefix_frac shard_id backwards1 gen_model_name num_rescore gen_subset model2_name num_shards prefix_len sampling get_directories print source_lang source_prefix_frac model1_name get_generation_parser target_lang score_bw language_model data_dir_name lm_name rescore_file_name lm_dict target_prefix_frac shard_id gen_model_name lm_scoring num_rescore gen_subset nbest_list num_shards prefix_len sampling get_directories lm_bpe_code BitextOutputFromGen print source_lang source_prefix_frac target_lang score_lm seed list tune_param tune_subset concatenate Namespace gen_subset index copy rerank nbest_list share_weights array range enumerate len random_search get_tuning_parser int search group span append float compile split strip search group span append float enumerate compile split get_prefix_no_bpe len remove_bpe ceil split split sum get_prefix_no_bpe join reverse split rstrip replace remove_bpe search compile strip len get_num_bpe_tokens_from_len calc_length_from_frac append sum range len str join dirname no_bpe_hypo parse_args_and_arch call get_eval_lm_parser 
get_preprocessing_parser main no_bpe_target no_bpe_source write_reprocessed parse_args join user_dir info strip eval dirname abspath fr2en en2fr exists gen_paraphrases join listdir split str label write close paragraph open output_dir input_dir range get_examples makedirs len replace split enumerate add text lower idx startswith len MosesDetokenizer load get_spacy_nlp get_detokenizer append lower items list hasattr add_argument add_args parse_known_args getattr ArgumentParser parse_args parse_known_args add_argument ArgumentParser listen start write Application base_architecture getattr transformer_iwslt_de_en base_monotonic_rchitecture transformer_vaswani_wmt_en_de_big transformer_iwslt_de_en size list cat safe_cumprod cumsum exp log item t size unsqueeze view t size unsqueeze new_ones add_argument format print debug DecodePieces string cpu split load_checkpoint_to_cpu setup_task build_model load_state_dict build_criterion append fp16 make_generation_fast_ cuda half setup_task data process_predictions import_user_module StopwatchMeter get_dataset_itr SentencePieceProcessor dataset stop log check_args inference_step tolist progress_bar load_dataset sum gen_subset update format optimize_models results_path start TimeMeter avg prepare_result_files beam enumerate n criterion target_dictionary build_generator Load path load_models_and_criterions pathsep split cpu len add_asr_eval_argument get_generation_parser main append str Token align codes tolist append range arr_to_toks mean var any calc_mean_invstddev size item append replabel_symbol range index append range int join length rate channels map encode_line info EncodeAsPieces endswith Sample cpu_count name from_iterable dictionary dump audio_format load namedtuple labels output Namespace Namespace bias LinearizedConvolution sqrt normal_ weight constant_ getattr getattr getattr getattr getattr getattr getattr append Token split WERTransformer WERTransformer WERTransformer items list ref intra_ref load_sys sys multi_ref load_ref pairwise merge list items sorted append rstrip startswith _corpus_bleu sys_len totals ref_len _corpus_bleu counts compute_bleu range append range len list corpus_bleu print add choice set mean zip append argmax range len list corpus_bleu print extend from_iterable zip append pairwise range enumerate len float input_file join defaultdict split append open combinations list sorted strip write mkstemp keys range open mkstemp call write remove remove factorial append float sum open read_translations generate_input meteor run_meteor read_output repetitions infile _normalize_spaces repeat_times read add_argument ArgumentParser seed Random realpath ext tell StopwatchMeter save_dir hasattr OrderedDict getattr sum update epoch get_num_updates format copy start info remove lexists end_of_epoch best_function stop checkpoint_paths makedirs join epoch format optimizer_overrides get_train_iterator reset_optimizer lr_step reset_lr_scheduler eval getattr load_state_dict restore_file save_dir items list setattr _upgrade_state_dict load_model_ensemble_and_task load_checkpoint_to_cpu setup_task replace build_model load_state_dict append fullmatch append listdir compile enumerate range move_to_cpu has_parameters state_dict get items list getattr set_defaults max max_positions list group search create_pruning_pass info append keys load_checkpoint_to_cpu list isinstance OrderedDict load_state_dict startswith keys join makedirs get int format all check_output distributed_world_size randint is_master warn distributed_rank 
get_model_parallel_rank get_ordinal setLevel cuda seed distributed_init_method get_rank format init_process_group get_local_ordinal info is_available model_parallel_size INFO model_parallel_cuda_manual_seed WARNING gethostname rendezvous mark_step all_reduce initialize_model_parallel is_initialized pop set_device main after_distributed_init_fn device_id distributed_init spawn distributed_main distributed_rank set_sharing_strategy getattr main infer_init_method device_id get_default_group _cpu_buffer copy_ move_to_cpu loads list tolist _buffer get_rank append range pack get_world_size ByteTensor unpack zero_ pin_memory bytes dumps all_reduce cpu len list _all_reduce_dict OrderedDict tensor to keys cached_path join remove format move mkdtemp rmtree info encode hexdigest sha256 str join isinstance str urlparse isinstance exists path netloc urlparse startswith resource split_s3_path Object resource split_s3_path download_fileobj enumerate get update partial write close tqdm request_wrap_timeout iter_content len get str s3_etag join partial list isinstance url_to_filename filter request_wrap_timeout startswith listdir head makedirs set items list join isinstance Namespace load_archive_file import_user_module load_model_ensemble_and_task startswith abspath exists tuple add_preprocess_args get_parser add_model_args add_optimization_args add_dataset_args add_distributed_training_args add_checkpoint_args get_parser add_dataset_args add_distributed_training_args add_interactive_args add_generation_args get_parser add_eval_lm_args add_dataset_args add_distributed_training_args get_parser add_argument_group add_common_eval_args add_dataset_args add_distributed_training_args get_parser eval isinstance eval isinstance items list hasattr max_tokens add_argument_group add_argument import_user_module parse_known_args add_args getattr ArgumentParser modify_parser parse_args set_defaults bf16 max_sentences items list replace import_user_module parse_known_args add_argument_group add_argument add_argument_group add_argument add_argument_group device_count add_argument max add_argument_group add_argument add_argument_group add_argument add_argument add_argument_group add_common_eval_args add_argument add_argument_group add_common_eval_args add_argument add_argument_group add_argument add_argument_group add_argument f_back MultiprocessingPdb quantize_model_ getattr replace set items list Namespace dest add_args ArgumentParser setattr _actions default strip sub append items list getattr split setattr getattr split deprecation_warning format set info symbols keys len range len get tokenize_line enumerate unk_string replace_unk string encode_line int resize_ arange LongTensor remainder arange size eq expand_as sum long hasattr get list norm stack device append zeros keys norm list isinstance warn aggregate_norm_fn stack clamp_ mul_ Tensor float multi_tensor_total_norm list isinstance _match_types tuple min map zip map_value_update join user_dir insert import_module getattr dirname abspath exists split warn deprecation_warning train training parameters next manual_seed set_rng_state get_rng_state set_torch_seed int split IntTensor enumerate len accumulate list len squeeze get_token_to_word_mapping zip append float max size squeeze size masked_fill_ eq unsqueeze sum log_softmax nll_loss append generate_fn collate_fn listdir split copy_tensor max fill_ enumerate infer_dataset_impl format count make_dataset info append len seed int get_state hash append function fromiter collect_filtered format size tolist 
_filter_by_size_dynamic warning len fromiter lexsort array strip rstrip LongTensor sort index_select sum merge exists empty readinto write array list keys shape compute_alignment_weights item zeros cat isinstance update str list items isinstance OrderedDict enumerate int list items OrderedDict split RandomCropDataset TruncateDataset sub byte_decode min range len append list range ord add set list map build_bpe ByteTensor range len _lang_token index index sum argmax sum hasattr clear MetersDict uuid4 str clear update MetersDict setdefault copy update AverageMeter get_active_aggregators add_meter get_active_aggregators _DerivedMeter add_meter update get_active_aggregators TimeMeter reset add_meter start StopwatchMeter get_active_aggregators add_meter get_active_aggregators stop update new_meter_fn get_active_aggregators add_meter get_meter reset reset get_meters items list MetersDict TqdmProgressBar FbTbmfWrapper NoopProgressBar SimpleProgressBar JsonProgressBar getattr sum format isinstance tolist round avg is_tensor name close values dict DistributedDataParallel GossipDataParallel find_unused_parameters append normal_ weight constant_ normal_ LearnedPositionalEmbedding weight constant_ bias normal_ weight constant_ bias sqrt normal_ weight constant_ base_architecture getattr base_architecture getattr base_architecture getattr base_architecture getattr getattr base_lm_architecture getattr base_lm_architecture getattr zero_ zero_ zero_ base_architecture getattr xavier_uniform_ encoder_layers encoder_ffn_embed_dim decoder_kernel_size_list attention_dropout encoder_embed_dim decoder_layers encoder_kernel_size_list decoder_embed_dim base_architecture getattr base_architecture base_architecture getattr getattr lightconv_wmt_en_de_big getattr lightconv_wmt_en_de_big attention_dropout decoder_kernel_size_list decoder_layers decoder_embed_dim base_lm_architecture getattr uniform_ uniform_ named_parameters named_parameters uniform_ uniform_ dropout base_architecture getattr dropout base_architecture getattr base_architecture getattr bert_base_architecture getattr base_architecture getattr append enumerate append size cat expand_2d_or_3d_tensor size sum type_as base_architecture getattr getattr base_multilingual_architecture base_architecture getattr base_architecture base_architecture getattr getattr transformer_vaswani_wmt_en_de_big getattr transformer_vaswani_wmt_en_de_big getattr transformer_vaswani_wmt_en_de_big base_architecture getattr getattr transformer_wmt_en_de_big load_checkpoint_to_cpu list keys transformer_base_architecture hasattr base_lm_architecture getattr transformer_lm_big getattr transformer_lm_big getattr base_lm_architecture getattr base_lm_architecture getattr base_lm_architecture getattr base_lm_architecture getattr Sequential Fp32LayerNorm Fp32GroupNorm TransposeLast getattr encoder_embed_dim encoder_ffn_embed_dim getattr decoder_embed_dim getattr bart_large_architecture getattr bart_large_architecture getattr bart_base_architecture getattr mbart_base_architecture getattr default_architecture getattr default_architecture getattr default_architecture getattr long encoder_embed_dim encoder_ffn_embed_dim getattr decoder_embed_dim cmlm_base_architecture list view size scatter_ zip float long suggested_ed2_path type_as masked_fill_ eq masked_fill gather float encoder_embed_dim encoder_ffn_embed_dim getattr decoder_embed_dim size rand masked_fill_ randint long range encoder_embed_dim encoder_ffn_embed_dim getattr decoder_embed_dim inat_base_architecture encoder_embed_dim 
encoder_ffn_embed_dim getattr decoder_embed_dim levenshtein_base_architecture levenshtein_base_architecture getattr levenshtein_transformer_vaswani_wmt_en_de_big getattr load_libnat load_libnat ne cumsum scatter_ new_zeros masked_fill_ sum max masked_scatter ne size masked_fill_ eq expand_as gather Tensor isinstance append size sum cat base_architecture getattr mean sum type_as float max detach base_architecture append next startswith new Counter stack unsqueeze append sum max range len English create_tokenizer spacy_nlp base_architecture base_architecture getattr base_lm_architecture getattr base_lm_architecture getattr log_softmax apply is_available sqrt pi is_available SinusoidalPositionalEmbedding register_forward_pre_hook isinstance Embedding normal_ zero_ Linear pad size unsqueeze as_strided isinstance conv_op transpose randn broadcast list get_param Embedding map encode update PQ info Linear isinstance PQLinear contiguous clone Conv2d get_layers is_initialized PQConv2d PQEmbedding list itemgetter map named_parameters compile str getattr __name__ obs type_as HistogramObserver calculate_qparams float obs PerChannelMinMaxObserver get_qparams type_as calculate_qparams obs type_as MinMaxObserver __dict__ tuple ActivationQuantizer __new__ keys import_module split split_exists join bos format count ConcatDataset dataset_exists PrependTokenDataset load_indexed_dataset index StripTokenDataset info eos AppendTokenDataset TruncateDataset append len rstrip softmax_batch prepare_for_inference_ string output_word_stats cuda values LMContextWindowDataset list SequenceScorer sorted set_device numel half add output_word_probs eq getattr load_model_ensemble generate device_id fp16 item setattr keys add_bos_token int dict any next_epoch_itr get_eval_lm_parser call_main setup_task getLogger prepare_for_inference_ unk add_string import_user_module StopwatchMeter print_alignment string warning eos cuda log split_paths sacrebleu seed basicConfig print_step inference_step hasattr result_string progress_bar tolist post_process_prediction map half build_bpe add pad getattr load_model_ensemble load_dataset sum gen_subset update SacrebleuScorer format strip_pad Scorer load_align_dict start build_tokenizer TimeMeter remove_bpe avg info fp16 beam set_torch_seed enumerate n join print target_dictionary build_generator path replace_unk decode_fn encode_line stop cpu get_original_text next_epoch_itr next_epoch_itr print_alignment make_batches buffered_read post_process_prediction map build_bpe pad buffer_size src_lengths strip_pad src_tokens resolve_max_positions load_align_dict build_tokenizer remove_bpe zip set_torch_seed source_dictionary replace_unk decode_fn max_positions get_interactive_generation_parser load_dictionary destdir tgtdict build_dictionary align_suffix save max srcdict get_task joined_dictionary addHandler make_all alignfile dict_path FileHandler task train_path source_lang make_all_alignments target_lang dataset_dest_file make_builder finalize dataset_dest_file make_builder finalize format destdir source_lang only_source target_lang dataset_dest_prefix parse_args get_preprocessing_parser stdin print score sentence_bleu Dictionary get_parser sacrebleu tpu is_master Trainer Quantizer arch save_dir max_tokens verify_checkpoint_directory distributed_world_size MegatronTrainer get_lr epoch build_model next_epoch_idx build_criterion lr_step get_train_iterator __name__ load_checkpoint rendezvous mark_step reset train max_sentences patience format getattr info mark_step rendezvous get_tpu_device log 
progress_bar getattr GroupedIterator epoch format get_num_updates begin_epoch validate_and_save cumulative_training_time info get_training_stats enumerate print split tpu_data_loader get_smoothed_values next_epoch_itr reset_meters get_num_updates save_checkpoint info validate elapsed_time round format get_valid_stats print progress_bar fixed_validation_seed getattr info append tpu_data_loader get_smoothed_values set_torch_seed next_epoch_itr get_num_updates best_checkpoint_metric hasattr format best save_checkpoint best_function get_training_parser profile load_model_ensemble_and_task all_gather_list valid_step vars get_validation_parser items list isinstance clone OrderedDict HalfTensor div_ float keys is_floating_point len int append group fullmatch ls compile inputs add_mutually_exclusive_group num_epoch_checkpoints last_n_checkpoints num_update_checkpoints average_checkpoints mosesdecoder_dir fast_align_dir print_keys load_indexed_dataset get_parser append fullmatch parse_checkpoints parse_checkpoints remove root_dirs exit copyfile save_last walk model main list parse_args_and_arch get_training_parser join mkdir _create_dummy_data main parse_args_and_arch get_training_parser main parse_args_and_arch get_training_parser main get_validation_parser parse_args_and_arch get_training_parser main get_eval_lm_parser parse_args_and_arch main list parse_args_and_arch get_training_parser SGD input_size parameters Model FairseqBMUF manual_seed nb_classes cuda distributed_init CrossEntropyLoss backward model loss_fn train step data train_step randn setup_model_loss_criterion set_device input_size random_ parameters is_available nb_classes cuda range cat randint format distributed_world_size Namespace format add_symbol Dictionary range enumerate setup_task parse_args ArgumentParser add_args MagicMock MagicMock mock_dict view sizes mock_trainer TokenBlockDataset EpochBatchIterator LanguagePairDataset str finalize Dictionary range add_symbol DataLoader TestDataset enumerate len setup_task Namespace LongTensor build_model target_dictionary dummy_dictionary eos _create_dummy_data _create_dummy_alignment_data main parse_args get_preprocessing_parser main parse_args get_preprocessing_parser main get_validation_parser parse_args_and_arch get_training_parser stdin parse_args_and_arch get_generation_parser main StringIO main parse_args_and_arch get_training_parser randn sort collate_tokens from_numpy index_select append randint range astype float32 from_numpy t_ randint basename format filename currentframe f_lineno _current_postion_info _current_postion_info | # A Theoretical Analysis of the Repetition Problem in Text Generation This repository share the code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021. The repetition problem has been observed in nearly all text generation models. We theoretically prove that this problem is, unfortunately, caused by the traits of our language itself. There exists too many words predicting the same word as the subsequent word with high probability. Consequently, it is easy to go back to that word and form repetitions. We dub this problem as the **high inflow problem**. Based on the theoretical analysis, we propose a novel **rebalanced encoding** approach to alleviate the high inflow problem. 
[[Paper]](https://ojs.aaai.org/index.php/AAAI/article/view/17520/17327) [[Slides]](https://github.com/fuzihaofzh/repetition-problem-nlg/blob/main/aaai21-repetition.pdf) [[Video]](https://slideslive.com/38948354/a-theoretical-analysis-of-the-repetition-problem-in-text-generation) [[arXiv Paper with Appendix]](https://arxiv.org/pdf/2012.14660.pdf) <p class="aligncenter"> <img src="https://user-images.githubusercontent.com/1419566/103292227-5fb86780-4a28-11eb-97c5-44ace9e0f6cb.png" width="75%" /> </p> | 2,141 |
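The entry above describes the "high inflow problem" only in prose. As a purely illustrative aid (this is not code from the repository; the function name and the bigram-count estimate are assumptions made here), the sketch below counts, for a toy corpus, how many distinct context words have a given word as their most likely successor, which is what makes a word "high inflow" and easy to loop back to:

```python
from collections import Counter, defaultdict

def high_inflow_words(tokens, top_k=5):
    # Count bigrams: how often each word v follows each context word w.
    bigrams = defaultdict(Counter)
    for w, v in zip(tokens, tokens[1:]):
        bigrams[w][v] += 1
    # For every context word, record its single most likely successor.
    inflow = Counter()
    for successors in bigrams.values():
        best_next, _ = successors.most_common(1)[0]
        inflow[best_next] += 1  # one more context word "flows into" best_next
    return inflow.most_common(top_k)

toy = "the cat sat on the mat and the dog sat on the rug".split()
# 'sat' and 'the' are each the top prediction of two different context words.
print(high_inflow_words(toy))
```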
fvancesco/emoji_modifiers | ['word embeddings'] | ['How Gender and Skin Tone Modifiers Affect Emoji Semantics in Twitter'] | example.py print_emo_nn print_nn conv_sw2v_to_emo startswith append join print conv_sw2v_to_emo append print conv_sw2v_to_emo startswith | ## How Gender and Skin Tone Modifiers Affect Emoji Semantics in Twitter #### Francesco Barbieri and Jose Camacho Collados The following repository includes the code and pre-trained embeddings from the paper *[How Gender and Skin Tone Modifiers Affect Emoji Semantics in Twitter](http://aclweb.org/anthology/S18-2011)* (*SEM 2018). <img src="diff.png" width="500"> ### Use our embeddings We release the two sets of 100-dimensional SW2V embeddings trained on Twitter (USA-based, English): 1. Word, base emoji and modifier embeddings. The vocabulary includes words (e.g. *house*, *car*, ...), base emojis (without sex or skin tone modifiers, e.g. 👍), and modifiers (e.g. male/female, or light/dark skin tone). Download embeddings [here](https://drive.google.com/open?id=1xcxfMyewMFgWjVg_UqtIBz273z0jq4x-) [~300 MB] 2. Word and emoji (base and modified) embeddings. The vocabulary includes words (e.g. *house*, *car*, ...) and emojis, both base (without sex or skin tone modifiers, e.g. 👍), and with modifiers (e.g. 👍🏻,👍🏽,👍🏿). Download embeddings [here](https://drive.google.com/open?id=1UuO9EKrJGElAjrjSJ4PspQZ4dNLGo3ya) [~300 MB] | 2,142 |
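If the released vectors follow the standard word2vec text format, they can be explored with gensim as in the hedged sketch below; the file name, and the format itself, are assumptions rather than facts stated in the entry above:

```python
from gensim.models import KeyedVectors

# Path and word2vec text format are assumed; adjust to the downloaded file.
vecs = KeyedVectors.load_word2vec_format("sw2v_twitter_100d.txt", binary=False)

# Nearest neighbours of a word or emoji token in the shared space.
for token, score in vecs.most_similar("house", topn=5):
    print(token, round(score, 3))
```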
fyangneil/pavement-crack-detection | ['edge detection', 'semantic segmentation'] | ['Feature Pyramid and Hierarchical Boosting Network for Pavement Crack Detection'] | python/caffe/io.py python/caffe/test/test_python_layer.py scripts/download_model_binary.py python/caffe/net_spec.py examples/fphb/solve_fphb_crack.py python/caffe/test/test_net.py tools/extra/resize_and_crop_images.py python/draw_net.py python/caffe/test/test_net_spec.py src/caffe/test/test_data/generate_sample_data.py examples/fphb/solve_fpn_crack.py python/caffe/draw.py python/caffe/pycaffe.py tools/extra/extract_seconds.py scripts/cpp_lint.py python/classify.py python/caffe/proto/caffe_pb2.py python/caffe/test/test_solver.py python/caffe/classifier.py test.py python/caffe/test/test_python_layer_with_param_str.py tools/extra/parse_log.py python/caffe/__init__.py python/caffe/test/test_layer_type_list.py scripts/copy_notebook.py python/caffe/detector.py python/detect.py examples/fphb/CustomSigmoidCrossEntropyLossLayer.py plot_single_scale save_single_scale CustomSigmoidCrossEntropyLossLayer interp_surgery upsample_filt interp_surgery upsample_filt main main main parse_args Classifier Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto arraylist_to_blobprotovecor_str array_to_datum resize_image blobprotovector_str_to_arraylist load_image oversample Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_inputs TestLayerTypeList simple_net_file TestNet lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer TestPythonLayer ParameterLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook 
parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop subplot str set_xticklabels set_yticklabels tight_layout imshow set_ticks_position figure range len range savemat imsave len print shape upsample_filt model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser read NetParameter output_image_file rankdir Merge draw_net_to_file items list DESCRIPTOR batch_size str num_output get_pooling_types_dict add_edge get_edge_label list Dot get_layer_label values name choose_color_by_layertype Edge Node bottom append type layer add_node top shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array LayerParameter list NetParameter _to_proto extend Counter OrderedDict values iteritems isinstance extend add getattr setattr items layers index set outputs _forward len items _backward layers inputs index set len items asarray extend copy next _batch iter forward values len items asarray backward extend next _batch zip_longest zip iter forward values len ascontiguousarray concatenate num iter zeros next range values len NamedTemporaryFile str close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU InnerProduct data NetSpec DummyData Silence data2 error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find range len FindEndOfExpressionInLine range len FindStartOfExpressionInLine error min search I range len FileInfo RepositoryName sep sub ParseNolintSuppressions error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin NumLines Match raw_lines range Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub find CheckComment Match range Search lines_without_raw_strings error group starting_linenum Match range Search error rfind len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine find Match range CloseExpression len error Match finditer normalize isinstance GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes 
RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set append values M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression lstrip findall Match range Search ReplaceAll error Match Search endswith replace setdefault group search CleanseComments open list FilesBelongToSameModule error search copy sub NumLines FullName keys range error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard NumLines CheckCompletedBlocks CheckForIncludeWhatYouUse range ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile getwriter int time write flush load join index int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir logfile_path | # Pavement crack detection: dataset and model The project is used to share our recent work on pavement crack detection. For the details of the work, the readers are refer to the paper "Feature Pyramid and Hierarchical Boosting Network for Pavement Crack Detection" (FPHB), T-ITS 2019. You can find the paper in https://www.researchgate.net/publication/330244656_Feature_Pyramid_and_Hierarchical_Boosting_Network_for_Pavement_Crack_Detection or https://arxiv.org/abs/1901.06340. The pavement crack datasets used in paper, crack detection results on each datasets, trained model, and crack annotation tool are stored in [Google Drive](https://drive.google.com/open?id=1y9SxmmFVh0xdQR-wdchUmnScuWMJ5_O-), [One Drive](https://tuprd-my.sharepoint.com/:u:/g/personal/tug13683_temple_edu/ESjezwsNLERMpvY85wOEKWkBQKY1A21M1rDhLID11pyRsg?e=ffr1zc), and [Daidu Yunpan](https://pan.baidu.com/s/1JwJO96BOtJ50MykBcYKknQ) extract code: jviq. **If you think this project is useful for you, feel free to leave a star. (^^)** # Installing 1. Install prerequisites for Caffe 2. Clone the repository ```shell git clone https://github.com/fyangneil/pavement-crack-detection.git | 2,143 |
fyumoto/EIF | ['anomaly detection'] | ['Extended Isolation Forest'] | setup.py version.py eif.py PathFactor c_factor iForest iTree Node all_branches read append right left | <a href="https://github.com/sahandha/eif/releases/tag/v1.0.2"> <img src="https://img.shields.io/badge/release-v1.0.2-blue.svg" alt="latest release" /></a><a href="https://pypi.org/project/eif/1.0.2/"><img src="https://img.shields.io/badge/pypi-v1.0.2-orange.svg" alt="pypi version"/></a> # Extended Isolation Forest This is a simple package implementation for the Extended Isolation Forest method. It is an improvement on the original algorithm Isolation Forest which is described (among other places) in this [paper](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf) for detecting anomalies and outliers from a data point distribution. The original algorithm suffers from an inconsistency in producing anomaly scores due to slicing operations. Even though the slicing hyperplanes are selected at random, they are always parallel to the coordinate reference frame. The shortcoming can be seen in score maps as presented in the example notebooks in this repository. In order to improve the situation, we propose an extension which allows the hyperplanes to be taken at random angles. The way in which this is done gives rise to multiple levels of extension depending on the dimensionality of the problem. For an *N* dimensional dataset, Extended Isolation Forest has *N* levels of extension, with *0* being identical to the case of standard Isolation Forest, and *N-1* being the fully extended version. Here we provide the source code for the algorithm as well as documented example notebooks to help get started. Various visualizations are provided such as score distributions, score maps, aggregate slicing of the domain, and tree and whole forest visualizations. most examples are in 2D. We present one 3D example. However, the algorithm works readily with higher dimensional data. ## Installation pip install eif or directly from the repository pip install git+https://github.com/sahandha/eif.git ## Requirements | 2,144 |
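To make the description above concrete, here is a minimal, hedged sketch of the fully extended split (level N-1): instead of picking one coordinate and a threshold as in standard Isolation Forest, each split draws a random hyperplane normal and a random intercept point. The function name and implementation details are illustrative assumptions and are not taken from the eif package:

```python
import numpy as np

def extended_split(X, rng):
    # Fully extended split: random hyperplane normal ("slope") and intercept point.
    d = X.shape[1]
    normal = rng.standard_normal(d)
    intercept = rng.uniform(X.min(axis=0), X.max(axis=0))
    side = (X - intercept) @ normal <= 0
    return X[side], X[~side]  # the two child partitions of a tree node

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
left, right = extended_split(X, rng)
print(left.shape, right.shape)
```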
gakkiri/EmbedMask | ['instance segmentation', 'semantic segmentation'] | ['EmbedMask: Embedding Coupling for One-stage Instance Segmentation'] | fcos/modeling/__init__.py fcos/modeling/backbone/vovnet.py fcos/data/builtin.py fcos/modeling/fcos/lovasz.py tools/convert_fcos_weight.py fcos/layers/ml_nms.py postprocessing.py fcos/modeling/fcos/__init__.py fcos/modeling/backbone/fpn.py fcos/modeling/fcos/fcos.py fcos/checkpoint/__init__.py fcos/config/config.py fcos/utils/measures.py tools/train_net.py fcos/modeling/fcos/fcos_outputs.py fcos/modeling/backbone/__init__.py fcos/__init__.py fcos/checkpoint/adet_checkpoint.py fcos/config/__init__.py fcos/layers/deform_conv.py fcos/utils/comm.py fcos/layers/__init__.py fcos/layers/conv_with_kaiming_uniform.py fcos/modeling/one_stage_detector.py tools/remove_optim_from_ckpt.py fcos/modeling/poolers.py tools/compute_flops.py fcos/config/defaults.py fcos/modeling/fcos/utils.py fcos/modeling/backbone/mobilenet.py fcos/data/__init__.py fcos/layers/iou_loss.py train_net.py detector_postprocess sem_seg_postprocess main setup Trainer AdetCheckpointer get_cfg register_all_coco conv_with_kaiming_uniform DFConv2d _NewEmptyTensorOp IOULoss ml_nms OneStageDetector TopPooler assign_boxes_to_levels_by_length _box_max_size LastLevelP6P7 build_fcos_resnet_fpn_backbone LastLevelP6 conv_1x1_bn InvertedResidual conv_bn build_mnv2_backbone MobileNetV2 _OSA_stage eSEModule conv1x1 VoVNet build_vovnet_backbone conv3x3 _OSA_module build_vovnet_fpn_backbone Hsigmoid build_fcos_vovnet_fpn_backbone FCOSHead FCOS Scale lovasz_grad LovaszHinge boxes_to_masks iou compute_mask_prob crop_by_box prepare_masks reduce_sum get_layer_info is_pruned is_leaf get_layer_param measure_model measure_layer get_num_gen main setup get_parser rename_resnet_param_names get_parser main setup Trainer pred_boxes has squeeze Instances scale proposal_boxes image_size clip expand merge_from_file config_file get_cfg merge_from_list default_setup opts freeze verify_results update test_with_TTA setup resume_or_load build_model test WEIGHTS ENABLED Trainer register_hooks eval_only is_main_process join list items register_coco_instances tensor batched_nms scores pred_classes tensor max clamp log2 floor epsilon cat FPN TOP_LEVELS MOBILENET LastLevelP6P7 build_resnet_backbone IN_FEATURES OUT_CHANNELS build_mnv2_backbone LastLevelP6 OUT_FEATURES MobileNetV2 OUT_FEATURES FPN IN_FEATURES OUT_CHANNELS build_vovnet_backbone FPN TOP_LEVELS LastLevelP6P7 build_vovnet_backbone IN_FEATURES OUT_CHANNELS LastLevelP6 cumsum sum len clamp min max exp sum new_ones expand new_zeros gt new_tensor shape append range len clamp size expand clamp expand all_reduce clone get_world_size mask str strip mul kernel_size reduce get_layer_param branch_2 conv out_channels numel groups relu padding size stride in_channels int norm get_layer_info condense_factor branch_1 modify_forward forward restore_forward format print measure_model zeros cuda add_argument ArgumentParser OrderedDict list keys replace | # EmbedMask Unofficial implementation for EmbedMask instance segmentation office: https://github.com/yinghdb/EmbedMask arxiv: https://arxiv.org/abs/1912.01954 ## Log #### 2020/6/3 |config|bbox|mask|weight| |-|:-:|-:|-:| |MS_R_50_2x.yaml|40.399|34.105|[google drive](https://drive.google.com/file/d/18p5s2NCZwbBNzZnUmfovF9RM1hzxlEX4/view?usp=sharing)| | 2,145 |
gangwg/smartgrids | ['time series'] | ['Real-time Power System State Estimation and Forecasting via Deep Neural Networks'] | utils.py alipolicy.py networkmpc.py DELETE-1.py customized_loss mpc preprocess_data cvx_fun construct_feed_dict sqrt array dot transpose diag T Minimize Problem Variable solve power dict update | # PSSE-via-DNNs A Keras implementation of our paper: L. Zhang, G. Wang, and G. B. Giannakis, “Real-time power system state estimation and forecasting via deep neural networks,” arXiv:1811.06146, Nov. 2018. [Online]. Available: https://arxiv.org/abs/1811.06146 If you find the code useful, please cite our paper. The data for the 118- and 57-bus systems can be downloaded from https://drive.google.com/drive/folders/1pAquFM2PPiWtleehXLLCxjsOnpvtB4QU?usp=sharing. To train the model and obtain the estimation performance, place the downloaded data in the repository root and run simple_test.py. To generate the plots, run get_plots.py. Please feel free to experiment with your own data. | 2,146
gaozhangyang/DGC | ['boundary detection'] | ['Clustering Based on Graph of Density Topology', 'Git: Clustering Based on Graph of Intensity Topology'] | utils/__init__.py dataloaders/__init__.py utils/measures.py git_cluster/detect_local_mode.py dataloaders/real_dataloader.py git_cluster/kde.py dataloaders/toy_dataloader.py dataloaders/mnist_loader.py git_cluster/git.py setup.py utils/plot_tools.py git_cluster/__init__.py git_cluster/topo_graph.py CleanInstall MNIST_DataLoader Real_DataLoader Toy_DataLoader LCluster GIT KDE_DIS TopoGraph wasserstein_distance ARI_calculator f1_score_calculator ACC_calculator match cover_calculator matchY measures_calculator Visualization term autoPlotly GIFPloter PaperGraph autoPlot zeros_like concatenate linprog reshape square sqrt append sum array range len list keys len int list sorted zeros_like astype set argsort match keys enumerate ARI_calculator normalized_mutual_info_score f1_score_calculator ACC_calculator set cover_calculator matchY DataFrame len show subplot list items seed reshape set_zticks axis shuffle set_visible scatter savefig figure xticks yticks Figure update_layout show pid format print getpid terminate | # GIT: Clustering Based on Graph of Intensity Topology This repository contains the implementation code for [paper](https://arxiv.org/abs/2110.01274):<br>__GIT: Clustering Based on Graph of Intensity Topology__<br> <!-- If you found this package useful, please cite: ``` @article{gao2021git, title={Git: Clustering Based on Graph of Intensity Topology}, author={Gao, Zhangyang and Lin, Haitao and Tan, Cheng and Wu, Lirong and Li, Stan and others}, journal={arXiv preprint arXiv:2110.01274}, year={2021} } | 2,147 |
garnier94/Concurrent_RNN | ['time series'] | ['Concurrent Neural Network : A model of competition between times series'] | concurrent_lstm/training.py concurrent_lstm/build_tensor.py concurrent_lstm/systematisation.py script/concurrent_training.py script/data_generation.py concurrent_lstm/__init__.py concurrent_lstm/LSTM_Models.py script/training_process_without luigi.py test_train_generation filter_by_level dataframe_to_torch adapt_data add_sum LSTM_Model non_concurrent_evaluation_model write_model concurrent_evaluation_model non_concurrent_training reference_prediction init_model sum_model_conc sum_model_non_conc concurrent_training_bis training_model predict Multi_comparaison Family_Data_Generation Build_Reference Build_Simple_Tensor sum filter_by_level shift copy drop append reshape range len get adapt_data print strptime min index get_level_values set dataframe_to_torch append sum max fillna days len get loss_function predict_non_concurrent_step print get model ones print shape loss_function reinitialize tensor range len close write open get int L1Loss Adam parameters float LSTM_Model get backward print zero_grad shuffle sum_model_non_conc copy loss_function predict_non_concurrent_step append step len get L1Loss get model sum_model_conc ones backward print zero_grad shuffle copy shape loss_function reinitialize append tensor step range concurrent_evaluation_model len init_model non_concurrent_training concurrent_training_bis Parameter Parameter Parameter Parameter | # Concurrent RNN This code has been improved in https://github.com/garnier94/Concurrent_Neural_Network. | 2,148 |
garrickbrazil/SDS-RCNN | ['pedestrian detection', 'semantic segmentation', 'autonomous driving'] | ['Illuminating Pedestrians via Simultaneous Detection & Segmentation'] | external/caffe/examples/web_demo/exifutil.py external/caffe/tools/extra/summarize.py external/caffe/python/caffe/io.py external/caffe/python/caffe/detector.py external/caffe/python/caffe/test/test_io.py external/caffe/scripts/copy_notebook.py external/caffe/python/caffe/pycaffe.py external/caffe/examples/web_demo/app.py external/caffe/tools/extra/resize_and_crop_images.py external/caffe/python/classify.py external/caffe/scripts/download_model_binary.py external/caffe/python/caffe/test/test_python_layer.py external/caffe/examples/pycaffe/layers/pascal_multilabel_datalayers.py external/caffe/examples/pycaffe/caffenet.py external/caffe/python/caffe/test/test_net_spec.py external/caffe/python/caffe/test/test_solver.py external/caffe/scripts/split_caffe_proto.py external/caffe/examples/pycaffe/tools.py external/caffe/python/caffe/net_spec.py external/caffe/python/draw_net.py external/caffe/tools/extra/extract_seconds.py external/caffe/python/caffe/test/test_coord_map.py external/caffe/python/caffe/classifier.py external/caffe/python/caffe/coord_map.py external/caffe/python/caffe/test/test_layer_type_list.py external/caffe/scripts/cpp_lint.py external/caffe/src/caffe/test/test_data/generate_sample_data.py external/caffe/examples/pycaffe/layers/pyloss.py external/caffe/python/caffe/test/test_net.py external/caffe/tools/extra/parse_log.py external/caffe/python/caffe/test/test_python_layer_with_param_str.py external/caffe/python/caffe/draw.py external/caffe/python/caffe/__init__.py external/caffe/python/detect.py external/caffe/examples/finetune_flickr_style/assemble_data.py external/caffe/python/train.py download_image make_net max_pool caffenet conv_relu fc_relu CaffeSolver SimpleTransformer print_info check_params PascalMultilabelDataLayerSync load_pascal_annotation BatchLoader EuclideanLossLayer start_tornado start_from_terminal embed_image_html classify_upload index allowed_file ImagenetClassifier classify_url open_oriented_im apply_orientation main main main parse_args train solve time Classifier coord_map UndefinedMapException conv_params coord_map_from_to AxisMismatchException inverse crop_params compose crop Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto array_to_datum resize_image arraylist_to_blobprotovector_str blobprotovector_str_to_arraylist load_image oversample Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_get_id_name _Net_inputs _Net_layer_dict TestCoordMap coord_net_spec TestBlobProtoToArray TestArrayToDatum TestLayerTypeList TestLevels TestStages simple_net_file TestNet TestAllInOne lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer phase_net_file TestPythonLayer ParameterLayer PhaseLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions 
CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop print_table printed_len summarize_net main read_net format_param imread urlretrieve Convolution InnerProduct Data SoftmaxWithLoss LRN Accuracy max_pool InnerProduct conv_relu fc_relu Dropout join list getElementsByTagName get_data_from_tag csr_matrix dict zip zeros float range enumerate len print format get read info load_image classify_image StringIO join replace info secure_filename save filename open_oriented_im classify_image fromarray replace astype save resize StringIO items list listen HTTPServer format print start WSGIContainer update start_tornado add_option OptionParser debug port parse_args ImagenetClassifier forward run hasattr _getexif astype float32 tile apply_orientation open transpose model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser read NetParameter output_image_file rankdir Merge TRAIN draw_net_to_file TEST Process str join init_log start append new_uid range log len before_backward layers display add_callback after_backward after_forward Timer append before_forward range len max_iter restore time set_solver_count set_solver_rank add_callback set_device set_multiprocess SGDSolver after_backward set_mode_gpu layer_wise_reduce step bcast NCCL len get params array get params array crop_params conv_params pop collect_bottoms add fn coord_map compose coord_map_from_to items list DESCRIPTOR batch_size str num_output get_pooling_types_dict add_edge get_edge_label list Dot exclude get_layer_label add_node values choose_color_by_layertype Edge Node bottom append type layer include top data array diff shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape 
Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array LayerParameter list NetParameter _to_proto extend Counter OrderedDict values iteritems hasattr isinstance extend add getattr setattr list OrderedDict _blobs _blob_names zip list _blob_loss_weights OrderedDict _blob_names zip _layer_names list layers OrderedDict zip OrderedDict list keys list keys iteritems layers index set outputs _forward len iteritems _backward layers inputs index set len iteritems asarray extend copy next _batch itervalues forward len iteritems asarray backward extend copy next _batch itervalues zip_longest zip forward len ascontiguousarray concatenate itervalues zeros next range len data Pooling pool Convolution NetSpec Deconvolution conv Input NamedTemporaryFile str close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU InnerProduct data NetSpec DummyData Silence data2 error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find range len FindEndOfExpressionInLine range len FindStartOfExpressionInLine error min search I range len FileInfo RepositoryName sep sub ParseNolintSuppressions error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin NumLines Match raw_lines range Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub find CheckComment Match range Search lines_without_raw_strings error group starting_linenum Match range Search error rfind len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine find Match range CloseExpression len error Match finditer normalize isinstance GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set append values M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression lstrip findall Match range Search ReplaceAll error Match Search endswith replace setdefault group search CleanseComments open list FilesBelongToSameModule error search copy sub NumLines FullName keys range error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths 
CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard NumLines CheckCompletedBlocks CheckForIncludeWhatYouUse range ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile getwriter int time write flush load join index int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir logfile_path NetParameter decay_mult format name lr_mult append print zip len get join str format convolution_param list setdefault param kernel_size map set top bottom append type module layer enumerate print_table filename summarize_net read_net | garrickbrazil/SDS-RCNN | 2,149 |
garyzhao/ME-ADA | ['data augmentation'] | ['Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness'] | common/utils.py models/alexnet.py main_mnist.py common/data_gen_MNIST.py model_cifar.py models/allconv.py main_cifar.py models/densenet.py models/lenet.py models/wideresnet.py model_mnist.py models/resnext.py main main ModelMEADA ModelADA ModelBaseline Denormalise ModelMEADA ModelADA ModelBaseline load_syn load_usps get_data_loaders load_mnist BatchImageGenerator load_mnist_m load_svhn fix_torch_seed unfold_label cross_entropy_loss shuffle_list sgd fix_all_seed shuffle_data fix_nn adam write_log fix_python_seed compute_accuracy num_flat_features mse_loss shuffle_list_with_ind entropy_loss AlexNet alexnet make_layers GELU AllConvNet densenet Transition DenseNet Bottleneck SingleLayer LeNet5 ResNeXtBottleneck resnext29 CifarResNeXt BasicBlock NetworkBlock WideResNet ModelADA ModelMEADA add_argument ModelBaseline ArgumentParser parse_args train MNIST numpy range append join SVHN append numpy range len fromarray join download_url zip append numpy _trans fromarray join reshape transpose numpy append loadmat _trans join reshape convert append numpy _trans parameters int min astype int8 append range len permutation arange len shuffle permutation arange len log_softmax sum softmax CrossEntropyLoss MSELoss SGD Adam str close write open print seed print manual_seed_all manual_seed print seed manual_seed_all manual_seed accuracy_score argmax update load_url AlexNet load_state_dict state_dict Conv2d DenseNet CifarResNeXt | # Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness (ME-ADA) This repository contains the Pytorch implementation of [Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness](https://arxiv.org/abs/2010.08001). If you find our code useful in your research, please cite: ``` @inproceedings{zhaoNIPS20maximum, author = {Zhao, Long and Liu, Ting and Peng, Xi and Metaxas, Dimitris}, title = {Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness}, booktitle = {Advances in Neural Information Processing Systems (NeurIPS)}, year = {2020} } ``` | 2,150 |
gbc8181/TISLF | ['image retrieval', 'video retrieval'] | ['Video Logo Retrieval based on local Features'] | oak/divide.py oak/transformation.py oak/compress.py oak/change.py oak/divide2.py resizeImg int size save float open | # Target Image Video Search Based on Local Features (TISLF) The codes are used for implementing TISLF for target image deterction in videos in: B. Guan, H. Ye, H. Liu and W. A. Sethares, "Video Logo Retrieval Based on Local Features," 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 2020, pp. 1396-1400, doi: 10.1109/ICIP40778.2020.9191208. # Required softwares Python2.7 C++14 Matlab 2016a+ # How to run this code 1. Clone this repository with `git clone https://github.com/gbc8181/TISLF.git`. 2. Install Parallel Computing Toolbox in Matlab. | 2,151 |
gberta/Visual-Spatial-Network | ['semantic segmentation'] | ['Unsupervised Learning of Important Objects from First-Person Videos'] | predict.py | # VSN Model This is a code repository for the paper "Unsupervised Learning of Important Objects from First-Person Videos" (https://arxiv.org/pdf/1611.05335.pdf). Our method predicts important-objects from first-person images in an unsupervised fashion. This work has been published in the ICCV 2017 Conference. Citation: @InProceedings{gberta_2017_ICCV_vsn, author = {Gedas Bertasius and Hyun Soo Park and Stella X. Yu and Jianbo Shi}, title = {Unsupervised Learning of Important Objects from First-Person Videos}, booktitle = {The IEEE International Conference on Computer Vision (ICCV)}, month = {October}, year = {2017} } | 2,152 |
gcambara/wav2letter | ['speech recognition'] | ['wav2letter++: The Fastest Open-source Speech Recognition System'] | recipes/models/seq2seq_tds/librispeech/prepare.py recipes/models/lexicon_free/utilities/compute_upper_ppl_kenlm.py bindings/python/examples/criterion_example.py bindings/python/wav2letter/criterion_torch.py recipes/models/conv_glu/librispeech/prepare.py recipes/models/lexicon_free/utilities/utils.py recipes/timit/data/utils.py recipes/data/wsj/prepare.py recipes/models/utilities/prepare_librispeech_official_lm.py bindings/python/wav2letter/decoder.py recipes/models/lexicon_free/utilities/compute_upper_ppl_convlm.py bindings/python/wav2letter/__init__.py recipes/models/utilities/convlm_serializer/save_pytorch_model.py bindings/python/wav2letter/feature.py bindings/python/examples/feature_example.py recipes/data/librispeech/utils.py recipes/models/conv_glu/wsj/prepare.py recipes/models/lexicon_free/librispeech/prepare.py bindings/python/wav2letter/common.py tutorials/1-librispeech_clean/prepare_data.py recipes/models/lexicon_free/utilities/compute_lower_ppl_kenlm.py recipes/data/wsj/utils.py bindings/python/examples/decoder_example.py bindings/python/setup.py recipes/data/librispeech/prepare.py bindings/python/wav2letter/criterion.py recipes/models/lexicon_free/utilities/compute_lower_ppl_convlm.py tutorials/1-librispeech_clean/prepare_lm.py recipes/models/lexicon_free/utilities/convlm_utils.py recipes/models/lexicon_free/wsj/prepare.py recipes/timit/data/prepare_data.py CMakeExtension CMakeBuild check_negative_env_flag check_env_flag load_emissions load_tn load_transitions assert_near read_struct load_data ASGLoss get_data_ptr_as_bytes run_direction run_backward run_get_workspace_size FCCFunction create_workspace run_forward check_tensor FACFunction get_cuda_stream_as_bytes parse_speakers_gender read_list transcript_to_list find_transcript_files ndx_to_samples convert_to_flac preprocess_word find_transcripts get_spelling compute_word_logprob compute_words_model_pdf_mass compute_ppl_lower_limit compute_denominator compute_word_logprob compute_words_model_pdf_mass compute_ppl_lower_limit compute_denominator compute_ppl_upper_limit_char_convlm compute_ppl_upper_limit_word_convlm compute_upper_limit_ppl_for_kenlm load_char_model_14B compute_new_state load_word_model decodeInputText load_char_model_20B build_token_index_correspondence convert_words_to_letters_asg_rep2 transform_asg_back prepare_vocabs_convlm transform_asg prepare_vocabs compare remap_words_with_same_spelling get_spelling convert save_model copytoflac write_sample findtranscriptfiles join abspath cuda_stream Size fn cpu_impl getattr device run_direction run_direction device run_get_workspace_size endswith join walk append dirname lower sub replace dict join walk setdefault sort join Transformer format remove duration strip system build set_output_format sub replace cuda max argsort sum compute_word_logprob compute_words_model_pdf_mass exp print strip set add append enumerate len list State BaseScore str append power State array BeginSentenceWrite BaseScore transform_asg_back split State exp format print cuda enumerate exp format print cuda enumerate dict items list load compute_new_state eval load_state_dict cuda load compute_new_state eval load_state_dict cuda load compute_new_state eval load_state_dict cuda append dict items list sorted defaultdict dict load Transformer set_output_format build copytoflac join replace endswith join walk append | # wav2letter++ 
[](https://circleci.com/gh/facebookresearch/wav2letter) wav2letter++ is a fast, open source speech processing toolkit from the Speech team at Facebook AI Research built to facilitate research in end-to-end models for speech recognition. It is written entirely in C++ and uses the [ArrayFire](https://github.com/arrayfire/arrayfire) tensor library and the [flashlight](https://github.com/facebookresearch/flashlight) machine learning library for maximum efficiency. Our approach is detailed in this [arXiv paper](https://arxiv.org/abs/1812.07625). This repository also contains pre-trained models and implementations for various ASR results including: - [Likhomanenko et al. (2019): Who Needs Words? Lexicon-free Speech Recognition](recipes/models/lexicon_free/README.md) - [Hannun et al. (2019): Sequence-to-Sequence Speech Recognition with Time-Depth Separable Convolutions](recipes/models/seq2seq_tds/README.md) The previous iteration of wav2letter (written in Lua) can be found in the [`wav2letter-lua`](https://github.com/facebookresearch/wav2letter/tree/wav2letter-lua) branch. ## Building wav2letter++ See [Building Instructions](docs/installation.md) for details. ## Full documentation | 2,153 |
gchers/exact-cp-optimization | ['anomaly detection'] | ['Exact Optimization of Conformal Predictors via Incremental and Decremental Learning'] | eli.py Eli job_file_name JobWrapper join | # Exact Optimization of Conformal Predictors via Incremental and Decremental Learning Implementation and evaluation of full CP (classifiers and regressors) optimized via incremental&decremental learning. Paper at: https://arxiv.org/abs/2102.03236. To appear: ICML '21. ## Code structure Please, refer to the notebook `exact-cp-optimization.ipynb` for implementation and evaluation details. If you have troubles viewing the notebook via Github, use: https://nbviewer.jupyter.org/github/gchers/exact-cp-optimization/blob/main/exact-cp-optimization.ipynb. The code in `eli.py` is a simple library used for carrying out the experiments. ## Citing ``` @misc{cherubin2021exact, | 2,154 |
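For context on the record above, a minimal reminder of what a full (transductive) conformal predictor computes. This is a generic sketch with a placeholder `fit_and_score` retraining routine; it is not the interface of the repository's `eli.py`.

```python
# Generic full conformal prediction p-value, not code from gchers/exact-cp-optimization.
# `fit_and_score` is a placeholder that retrains a model on the augmented data and
# returns one nonconformity score per example (last entry = the test example).
import numpy as np

def full_cp_pvalue(X_train, y_train, x_new, y_candidate, fit_and_score):
    X_aug = np.vstack([X_train, x_new[None, :]])
    y_aug = np.append(y_train, y_candidate)
    scores = np.asarray(fit_and_score(X_aug, y_aug))
    # p-value: fraction of examples at least as nonconforming as the new one
    return np.mean(scores >= scores[-1])
```

The cost of full CP comes from rerunning `fit_and_score` once per candidate label, which is exactly what incremental and decremental learning is used to avoid in the paper above.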
gchrupala/imaginet | ['word embeddings'] | ['Learning language through pictures'] | imaginet.py layers.py convert.py example.py main.py utils.py main Combined Stacked RNN one_hot ForkedRNN load_workflow predict_h CosineDistance flatten NoScaler Dense CategoricalCrossEntropySwapped predict_h_simple padded SortedPaddedXYZ Workflow WrappedLayer MatrixMult Mult Direct Add test_linear extract_embeddings test project_words main train train_linear Cdist serialize deserialize load serialize dump open load deserialize open asarray arange shape repeat tile zeros function iterX extend output _predict_h2 append function iterX extend output _predict_h append list asarray tuple map shape max range len reshape sum seed random_seed extract_embeddings test_linear add_argument test project_words ArgumentParser parse_args train train_linear list Ridge dump write getDataProvider array CountVectorizer open dataset iterImageSentencePair fit_transform fit paraphrase open seed list cdist sum iterImageSentencePair predict asarray random_seed enumerate load stdout cosine_distance print argsort iterImages transform array Cdist RNN model dataset open list len tokenizer getDataProvider Adam iterImageSentencePair fit_transform dump Direct shuffle zip zero_shot ForkedRNN transform fit paraphrase dataset open seed list append sum iterImageSentencePair predict dump format hstack predict_h_simple random_seed enumerate load stdout cosine_distance print predict_h argsort iterImages transform array Cdist load list dump dict zip inverse_transform predict open load dump get_value dict open getattr layers __module__ settings append __name__ | # imaginet Imaginet implements several models which read sentences describing images and learn to build representations of these images grounded in the visual features of the corresponding images. These models were introduced in [Chrupała et al 2015](http://arxiv.org/abs/1506.03694). This repository contains the original implementation. New developments are taking place in [Reimaginet](https://github.com/gchrupala/reimaginet) Installation ============ Currently, installation is manual. The main prerequisite is | 2,155 |
gchrupala/symbolic-bias | ['image retrieval'] | ['Symbolic Inductive Bias for Visually Grounded Learning of Spoken Language', 'Symbolic inductive bias for visually grounded learning of spoken language'] | experiments/s2-t1-s2i1-s2t0-t2s0-t2i1-disjoint-g/run.py experiments/s2-t1-s2i1-s2t0-t2s0-t2i1-joint-g/run.py experiments/s2-t1-s2i1-s2t0-t2s0-t2i1-disjoint-f/run.py experiments/s2-t1-s2i2-s2t1-t2s1-t2i1-disjoint-e/run.py onion/attention.py onion/mlp.py vg/libri_provider.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i1-disjoint-e/run.py vg/mfcc.py vg/bundle.py vg/places_provider.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i1-joint-e/run.py vg/defn/three_way2.py onion/loss.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i.-disjoint-f/run.py vg/flickr8k_provider.py experiments/s2-t1-s2i1-s2t1-t2s1-t2i1-joint-e/run.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i.-joint-f/run.py onion/stacked_gru.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i.-disjoint-e/run.py experiments/s2-t1-s2i1-s2t1-t2s1-t2i1-disjoint-e/run.py experiments/s2-t1-s2i2-s2t1-t2s1-t2i1-disjoint-f/run.py vg/scorer.py onion/grad_reverse.py vg/align.py vg/simple_data.py experiments/s2-t1-s2i1-s2t0-t2s0-t2i1-disjoint-e/run.py experiments/s2-t1-s2i1-s2t1-t2s1-t2i1-joint-f/run.py vg/activations.py vg/defn/three_way_dec.py experiments/s2-t1-s2i2-s2t1-t2s1-t2i1-disjoint-g/run.py onion/util.py onion/conv.py vg/defn/encoders.py experiments/s2-t1-s2i1-s2t0-t2s0-t2i1-joint-f/run.py experiments/s2-t1-s2i1-s2t1-t2s1-t2i1-disjoint-g/run.py preprocess.py experiments/s2-t1-s2i1-s2t0-t2s0-t2i1-joint-e/run.py experiments/s2-t.-s2i2-s2t.-t2s.-t2i.--e/run.py vg/defn/three_way_stack.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i1-joint-f/run.py experiments/s2-t1-s2i2-s2t1-t2s1-t2i1-joint-f/run.py onion/init.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i.-joint-g/run.py analysis/analyze.py vg/evaluate.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i1-joint-g/run.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i1-disjoint-g/run.py vg/defn/decoders.py experiments/s2-t.-s2i2-s2t.-t2s.-t2i.--f/run.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i1-disjoint-f/run.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i.-disjoint-g/run.py setup.py experiments/s2-t.-s2i2-s2t.-t2s.-t2i.--g/run.py experiments/s2-t1-s2i1-s2t1-t2s1-t2i1-joint-g/run.py experiments/s2-t1-s2i2-s2t1-t2s1-t2i1-joint-e/run.py experiments/s2-t1-s2i2-s2t1-t2s1-t2i1-joint-g/run.py onion/rhn.py experiments/s2-t1-s2i2-s2t0-t2s0-t2i.-joint-e/run.py experiments/s2-t1-s2i1-s2t1-t2s1-t2i1-disjoint-f/run.py synth decodemp3 extract_mfcc synthesize run_img_feats merge mfcc_h5 mfcc encode main get_imgs speak json2latex test_results valid_runs inv_results fa_data phones get_layer_states state_stack validscores phoneme_decoding_data rsa_results bestrec slices main testscores bestrun phoneme_decoding_results get_nets inout get_state_stack valid_results index layer_states melt make_json_happy load_best_run audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio audio softmax_time SelfAttention ConvTranspose1D Convolution1D GradReverse grad_reverse glorot_uniform get_fans orthogonal uniform xavier pearson cosine_matrix contrastive rsa triu MLP StackedRHN FixedZeros Residual Zeros Identity WithH0 Compose StackedRHNH0 RHNH0 RHN Linear StackedGRU make_linear autoassign grouper state_stack inout get_state_stack index from_audio phoneme_activations align from_audio phones slices on_progress load Bundle GenericBundle paraphrase_ranking ranking Provider getDataProvider 
touttid load_mfcc Provider getDataProvider extract_mfcc parse_map Provider getDataProvider load stringsim Scorer score encode_sentences rer encode_images encode_sentences_SpeechText encode_texts RSA main triu testing compressed midpoint arrange InputScaler SimpleData IdMapper phonemes by_speaker words transcription NoScaler characters scale_utterance outsidein Batcher randomized IdTable insideout vector_padder UncondDecoder CondDecoder BilinearAttention SimpleDecoder beam_search DecoderWithAttn SpeechEncoderBottomNoConv SpeechEncoderTop SpeechEncoderBottomStack l2normalize SpeechEncoderBottomBidi SpeechEncoderTopBidi SpeechEncoder GRUStack TextEncoderTop SpeechEncoderTopStack SpeechEncoderBottom ImageEncoder TextEncoderBottom TextEncoder valid_loss experiment TextImage encode_images_TextImage Net encode_texts step SpeechImage SpeechText experiment TextImage encode_images_TextImage SpeechTranscriber Net encode_texts step SpeechImage SpeechText valid_loss experiment TextImage encode_images_TextImage Net encode_texts step SpeechImage SpeechText add_argument add_parser func ArgumentParser parse_args set_defaults setLevel add_subparsers from_mp3 BytesIO export BytesIO write_to_fp format info parse_map open save open writer str list walk format extract_mfcc parse close info enumerate join int items print writerow output parse_map array split mfcc read output crop_size cnn tencrop save resize img_features dataset get_imgs to_latex test_results print phoneme_decoding_results read_json dumps valid_results rsa_results mean valid_runs append copy append format testscores melt list valid_runs RSA format Scorer encode_sentences getDataProvider dict encode_sentences_SpeechText string_sim load_best_run info cosine_similarity sentence_data setLevel array sim_images append format info score float LogisticRegression dict keys item transform train_test_split StandardScaler fit_transform append fit dict format cuda load_best_run list getDataProvider stride iterSentences loads fa_data get_layer_states info setLevel open list inout map zip append expand_dims array list inout map zip append array permute isnan vstack zip sum array phones load format info bestrun append validscores format dict range len keys argmax format loads zip append range open load format open append format bestrec isinstance svd astype reshape get_fans sqrt sqrt astype sqrt prod clamp diag view cosine_matrix norm mean norm sum Variable ones_like data list keys zip_longest zeros orthogonal Linear load device phoneme_activations append zip list zip debug list items info cdist set argsort intersection append enumerate len split join split load dump retrieval_para rsa_image Scorer model text output getDataProvider dict info speaker_id dataset cuda append open list max map groupby sorted grouper get tokenize enumerate dict items list append list dict append keys range len states SpeechEncoderBottom backward clip_grad_norm_ zero_grad train_cost parameters zero_grad train cuda save clip_grad_norm | # symbolic-bias Code for the paper: Symbolic inductive bias for visually grounded learning of spoken language. https://arxiv.org/abs/1812.09244, published at ACL 2019. ## Install Clone repo and set up and activate a virtual environment with python3 ``` cd symbolic-bias virtualenv -p python3 . ``` Install Python code (in development mode if you will be modifying something). ``` | 2,156 |
gcosne/OceanographyProject | ['time series'] | ['Coupling Oceanic Observation Systems to Study Mesoscale Ocean Dynamics'] | library.py func.py plot_one_profile discrete_colorbar cmap_discretize EM_with_init EM_latent_class_regression is_square BIC_calculation replace_nan init_EM_latent_class_regression stop_condition_EM BIC_calculation_orth latent_class_regression least_squares_covariance double_acp_target_feature list set_clim colorbar set_ticks ScalarMappable linspace set_ticklabels range set_array cmap concatenate linspace get_cmap enumerate show_loc grid GridSpec spectral pi bwr max values digitize subplot set_title viridis_r pcolormesh colorbar legend sin range set_label cmap_discretize plot int isel figure print reshape mean concatenate shape squeeze dot sqrt diag lstsq T size hstack close hist cov zeros array range fit_predict len EM_latent_class_regression init_EM_latent_class_regression zeros range T display print reshape pdf least_squares_covariance log shape stop_condition_EM append zeros sum array range len show subplot arange plot print explained_variance_ cumsum xlabel PCA ylabel title scatter figure StandardScaler fit_transform explained_variance_ratio_ PCA StandardScaler fit_transform BIC_calculation_orth EM_latent_class_regression init_EM_latent_class_regression log | # Coupling Oceanic Observation Systems to Study Mesoscale Ocean Dynamics The paper related to the code implemented can be found here : [arxiv](https://arxiv.org/abs/1910.08573). **Abstract** : Understanding local currents in the North Atlantic region of the ocean is a key part of modelling heat transfer and global climate patterns. Satellites provide a surface signature of the temperature of the ocean with a high horizontal resolution while in situ autonomous probes supply high vertical resolution, but horizontally sparse, knowledge of the ocean interior thermal structure. The objective of this paper is to develop a methodology to combine these complementary ocean observing systems measurements to obtain a three-dimensional time series of ocean temperatures with high horizontal and vertical resolution. Within an observation-driven framework, we investigate the extent to which mesoscale ocean dynamics in the North Atlantic region may be decomposed into a mixture of dynamical modes, characterized by different local regressions between Sea Surface Temperature (SST), Sea Level Anomalies (SLA) and Vertical Temperature fields. Ultimately we propose a Latent-class regression method to improve prediction of vertical ocean temperature. **LATEST VERSION OF THE NOTEBOOK can be found on colaboratory here** : [colab](https://colab.research.google.com/drive/1xX_XcPrx6cdHfIDTYd7K7BpJnu5LliDv) | 2,157 |
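The abstract above proposes a latent-class regression between surface observations and vertical temperature. As a generic illustration of what such a model involves (a mixture of linear regressions fitted by EM with `K` latent classes), and not the repository's `EM_latent_class_regression` itself:

```python
# Generic mixture-of-linear-regressions EM sketch; the repository's version may differ.
import numpy as np
from scipy.stats import norm

def em_latent_class_regression(X, y, K, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(K, d))          # per-class regression weights
    sigma = np.ones(K)                   # per-class noise std
    pi = np.full(K, 1.0 / K)             # class priors
    for _ in range(n_iter):
        # E-step: responsibility of each class for each sample
        lik = np.stack([pi[k] * norm.pdf(y, X @ W[k], sigma[k]) for k in range(K)], axis=1)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per class
        for k in range(K):
            r = resp[:, k]
            Xw = X * r[:, None]
            W[k] = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]
            resid = y - X @ W[k]
            sigma[k] = np.sqrt((r * resid ** 2).sum() / r.sum())
        pi = resp.mean(axis=0)
    return W, sigma, pi, resp
```

Model selection over `K` can then be done with an information criterion, which is presumably the role of the BIC_calculation helpers listed in the dependency column.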
gdikov/calibrated-adversarial-learning | ['semantic segmentation'] | ['Calibrated Adversarial Refinement for Stochastic Semantic Segmentation'] | utils/networks.py utils/data.py Bifurcation Generator Discriminator MLP | # Calibrated Adversarial Learning This repository contains the code reproducing the toy regression example presented in Section 5.1. in the paper ["Calibrated Adversarial Refinement for Stochastic Semnatic Segmentation"](https://arxiv.org/abs/2006.13144) by Kassapis et al. Check out the [official repositoy](https://github.com/EliasKassapis/CARSSS) for reproducing all semantic segmentation experiments. ## Requirements The code has been tested with Python 3.7. The required python packages are listed in `requirements.txt`. ## Overview Two jupyter notebooks demonstrate the approach of using a calibration network and regularisation to improve conditional GAN sampling. Each is self-sufficient and uses utility code from the `utils` package which defines simple network builders and a data generator. Both notebook are structured as tutorials and contain minimal documentation. #### Part 1 The notebook `part_1.ipynb` shows visually the effect of the calibration regularisation on the generator, discriminator and the calibration networks in 1-dimensional bimodal regression setup. | Calibrated cGAN | Uncalibrated cGAN with mode collapse | | 2,158 |
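The 1-dimensional bimodal regression setup mentioned above can be pictured with a toy generator like the following; the exact data produced by the repository's `utils/data.py` is an assumption here.

```python
# Guessed shape of a 1-D bimodal regression toy task (illustration only).
import numpy as np

def bimodal_toy_data(n=1000, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=n)
    branch = rng.integers(0, 2, size=n)            # latent mode, unobserved
    y = np.where(branch == 1, np.sin(np.pi * x), -np.sin(np.pi * x))
    y = y + rng.normal(scale=noise, size=n)
    return x[:, None], y[:, None]
```

A well-calibrated conditional sampler should recover both branches for a given x, whereas a collapsed cGAN tends to produce only one of them; that is the failure mode the calibration regularisation targets.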
geek-ai/Texygen | ['text generation'] | ['Texygen: A Benchmarking Platform for Text Generation Models'] | models/mle/MleGenerator.py utils/utils.py models/rankgan/RankganGenerator.py models/rankgan/RankganDiscriminator.py utils/text_process.py models/mle/MleDataLoader.py models/seqgan/SeqganGenerator.py utils/metrics/EmbSim.py models/leakgan/Leakgan.py utils/oracle/OracleSru.py models/rankgan/Rankgan.py utils/oracle/OracleLstm.py models/maligan_basic/MaliganGenerator.py models/maligan_basic/Maligan.py utils/metrics/Bleu.py models/leakgan/LeakganDataLoader.py models/maligan_basic/MaliganReward.py models/rankgan/RankganReward.py models/pg_bleu/PgbleuDataLoader.py models/gsgan/Gsgan.py models/maligan_basic/MailganDiscriminator.py models/pg_bleu/PgbleuGenerator.py models/leakgan/LeakganGenerator.py models/maligan_basic/MaliganDataLoader.py utils/metrics/Nll.py models/rankgan/RankganDataLoader.py models/pg_bleu/Pgbleu.py models/leakgan/LeakganReward.py models/seqgan/SeqganReward.py models/seqgan/SeqganDiscriminator.py models/gsgan/GsganDataLoader.py utils/metrics/UniqueGram.py models/seqgan/SeqganDataLoader.py utils/metrics/Metrics.py utils/metrics/DocEmbSim.py models/seqgan/Seqgan.py models/textGan_MMD/TextganDataLoader.py models/gsgan/GsganGenerator.py utils/oracle/OracleCfg.py main.py utils/metrics/SelfBleu.py models/gsgan/GsganDiscriminator.py utils/metrics/Cfg.py models/leakgan/LeakganDiscriminator.py models/textGan_MMD/TextganDiscriminator.py models/textGan_MMD/Textgan.py models/mle/Mle.py models/pg_bleu/PgbleuReward.py utils/oracle/OracleGru.py models/textGan_MMD/TextganGenerator.py models/Gan.py set_gan set_training parse_cmd Gan Gsgan DataLoader DisDataloader Discriminator Generator Leakgan generate_samples_gen pre_train_epoch_gen DataLoader DisDataloader cosine_similarity linear highway Discriminator Generator redistribution rescale Reward linear highway Discriminator Maligan DataLoader DisDataloader Generator Reward Mle DataLoader DisDataloader Generator Pgbleu DataLoader DisDataloader Generator Reward Rankgan DataLoader DisDataloader cosine_distance linear get_rank_score Discriminator highway Generator Reward Seqgan DataLoader DisDataloader linear highway Discriminator Generator Reward generate_samples TextganMmd DataLoader DisDataloader Discriminator Generator text_precess get_tokenlized chinese_process get_word_list code_to_text text_to_code get_dict generate_samples pre_train_epoch init_sess Bleu Cfg DocEmbSim EmbSim Metrics Nll SelfBleu UniqueGram OracleCfg OracleGru OracleLstm OracleGru dict Gan train_real print exit RED train_oracle train_cfg RESET set_gan getopt print exit dict train_oracle set_training gan_func pretrain_step num_batch append next_batch reset_pointer range int list join extend generate range multiply l2_normalize as_list shape redistribution zeros array range len while_loop shape int list join extend generate range len list map len list append list dict str list get_tokenlized get_word_list get_dict max len global_variables_initializer ConfigProto Session run pretrain_step num_batch append next_batch reset_pointer range | <h1><img src="docs/fig/texygen-01.png" width="250"></h1> Texygen is a benchmarking platform to support research on open-domain text generation models. Texygen has not only implemented a majority of text generation models, but also covered a set of metrics that evaluate the diversity, the quality and the consistency of the generated texts. 
The Texygen platform could help standardize the research on text generation and facilitate the sharing of fine-tuned open-source implementations among researchers for their work. As a consequence, this would help in improving the reproducibility and reliability of future research work in text generation. For more details, please refer to our SIGIR 2018 paper: [Texygen: A Benchmarking Platform for Text Generation Models](https://arxiv.org/abs/1802.01886) by Yaoming Zhu et al. Should you have any questions or enquiries, please feel free to contact Yaoming Zhu (ym-zhu [AT] outlook.com) and [Weinan Zhang](http://wnzhang.net) (wnzhang [AT] sjtu.edu.cn). ## Requirement We suggest you run the platform under Python 3.6+ with the following libraries: * **TensorFlow >= 1.5.0** * Numpy 1.12.1 * Scipy 0.19.0 * NLTK 3.2.3 | 2,159
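Texygen's dependency list above includes metric modules such as Bleu and SelfBleu. As a hedged illustration of the Self-BLEU diversity metric using NLTK (not Texygen's own Metrics classes):

```python
# Rough Self-BLEU sketch with NLTK; Texygen's implementation may differ in details.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(tokenized_texts, weights=(0.25, 0.25, 0.25, 0.25)):
    """Average BLEU of each generated text against all the others.

    Lower values indicate more diverse generations."""
    smoother = SmoothingFunction().method1
    scores = []
    for i, hypothesis in enumerate(tokenized_texts):
        references = tokenized_texts[:i] + tokenized_texts[i + 1:]
        scores.append(sentence_bleu(references, hypothesis,
                                    weights=weights,
                                    smoothing_function=smoother))
    return sum(scores) / len(scores)
```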
geekSiddharth/DeepCoherence | ['sentence ordering'] | ['Text Coherence Analysis Based on Deep Neural Network'] | data/cui/data_preprocess.py train.py predict.py utills.py load_glove_embeddings.py model.py load_glove_embeddings get_model get_emdedding_layer embedded_cnn CoherenceModel load_cui_dataset f1 get_class_weights pad_all remove_non_ascii load_data get items list asarray zeros len append zip list load_glove_embeddings print Model embedded_cnn Input keys len pad_sequences recall precision max list Counter keys values strip readlines extend find append remove_non_ascii enumerate split | # DeepCoherence It is kind of based on [**Text Coherence Analysis Based on Deep Neural Network**](https://arxiv.org/abs/1710.07770) by *Baiyun Cui, Yingming Li, Yaqing Zhang, Zhongfei Zhang* Based on the origial data provied by [Baiyun Cui](mailto:[email protected]) ## Usage ## Download the code ``` git https://github.com/geekSiddharth/DeepCoherence.git cd DeepCoherence ``` | 2,160 |
geetickachauhan/relation-extraction | ['relation extraction'] | ['REflex: Flexible Framework for Relation Extraction in Multiple Domains'] | relation_extraction/data/utils.py relation_extraction/data/converters/converter_ddi.py relation_extraction/models/model.py relation_extraction/data/preprocess.py relation_extraction/models/model_utils.py scripts/main.py relation_extraction/models/bert_wrapper.py relation_extraction/models/pooling_wrapper.py scripts/main_utils.py scripts/parser.py relation_extraction/hyperparam_tuning/distributions.py relation_extraction/data/summarize.py relation_extraction/models/elmo_wrapper.py relation_extraction/data/converters/converter_i2b2.py relation_extraction/hyperparam_tuning/distributions_helpers.py relation_extraction/models/losses.py relation_extraction/data/converters/converter_semeval2010.py relation_extraction/data/data_exploration.py length_of_sentence get_entity_dict_df_pair_map get_entity_pair_dict_with_df length_of_context convert_pair_to_dict end_overlap sort_position_keys replace_digit_punctuation_stop_word convert_indexes_to_int remove_whitespace update_metadata_sentence list_to_int get_entity_positions_and_replacement_sentence beginning_and_end_overlap replace_ner overlap_index get_ner_dict get_entity_start_and_end replace_with_concept correct_entity_indexes_with_ner check_for_overlap ner_sort_position_keys get_new_sentence_with_entity_replacement fully_included beginning_overlap preprocess parse_position get_common_and_separate_entities get_ner_replacement_dictionary list_to_string entity_replacement_dict_with_entity_location get_entity_location_dict indiv_metric_comparison create_summary print_full_summary get_macro_micro_metric_comparison get_file_metrics read_precision_recall_f1 read_confusion_matrix_per_line get_sum_confusion_matrix read_accuracy_per_line create_metrics_macro_micro_df create_metrics_indiv_relations_df get_accuracy_difference generate_pretty_summary_confusion_matrix generate_confused_with_string get_confusion_matrix_as_df batch_iter graph_file_reader get_only_number pad_elmo_embedding pad_bert_embedding relative_distance split_data_cut_sentence write_bert_tokens_without_word_pieces per_sentence_replacement_ddi get_bert_CLS_embeddings build_dict pred_writer convert_labels test_pred_writer generate_feature_map_without_word_piece load_embedding stringify_tokenized pos load_embedding_senna average_over_token_embedding get_elmo_embeddings graph_reader replace_by_drug_ddi get_bert_token_embeddings sentence_replace load_data argument_to_list get_only_word vectorize Dataset create_positions_dict tag_sentence read_dataframe write_into_txt sort_position_keys get_entity_start_and_end get_other_entities remove_whitespace write_dataframe get_entity_positions_and_replacement_dictionary get_entity_dict check_equality_of_written_and_read_df get_entity_replacement_dictionary flatten_list_of_tuples parse_position combine get_dataset_dataframe tokenize get_concept_subparts write_into_txt get_artificial_relation_pair get_relation_pair_by_linenum get_dataset_dataframe_classification extract_concept_type_from_string append_existing_relations assign_e1_e2_relation get_line_number_and_word_number get_dataset_dataframe_extraction write_dataframe check_equality_of_written_and_read_df get_entity_replacement_dictionary relation_exist_in_pair_list get_concept_dictionary extract_concept_from_string read_dataframe get_concepts_by_linenum append_non_existing_relations extract_relation_from_string get_filename_with_extension get_filename_without_extension 
read_rel_line combine get_dataset_dataframe read_dataframe write_into_txt get_entity_start_and_end get_original_sentence remove_whitespace write_dataframe check_equality_of_written_and_read_df get_entity_replacement_dictionary get_dataset_dataframe tokenize to_rv DeltaDistribution DictDistribution make_rvs Censored MixtureDistribution build_mult_layer_fcr Coin CategoricalRV Distribution TransformedRV dict_fcr MarkovianGenerativeProcess sample_dict ListRV SubsetDistribution rv_int bert_wrapper elmo_wrapper ranking_loss CRCNN run_epoch accuracy prediction pooling_wrapper main res init preprocess_data_noncrossvalidated get_data read_macro_f1_from_result_file_ddi get_maximum_entity_and_sentence_length test_writer get_word_dict log_info read_macro_f1_from_result_file_semeval read_micro_f1_from_result_file_i2b2 dump_csv openFileAsList output_folder_creation max_sent_len get_current_time_in_miliseconds stack_data perform_assertions get_eval_column max_length_all_data max_ent_len test_writer_for_perl_evaluation create_folder_if_not_exists get_current_date evaluate set_hyperparams get_current_time_in_seconds get_results_dict get_config list apply print most_common get_entity_pair_dict_with_df Counter apply int abs split difference list intersection set update_dict_with_indexes list keys split sorted list keys get_common_and_separate_entities sort_position_keys len remove_whitespace list_to_string upper entity_replacement_dict_with_entity_location parse_position range split update_dict_with_indexes get_common_and_separate_entities sort_position_keys remove_whitespace list_to_string get_entity_location_dict parse_position range len load vocab is_punct like_num tagger get_new_sentence_with_entity_replacement is_stop len parser i Doc append range split len range enumerate intersection set intersection expand list end_overlap len beginning_overlap parse_position keys range beginning_and_end_overlap update_dict_with_entity sorted list keys vocab str label_ tagger end entity start parser Doc ents append ner_sort_position_keys get_common_and_separate_entities print get_ner_dict get_ner_replacement_dictionary convert_indexes_to_int len extend list_to_string remove_whitespace correct_entity_indexes_with_ner parse_position range split append range index endswith get_entity_start_and_end append pop list_to_string metadata tagged_sentence get_entity_positions_and_replacement_sentence split drop read_dataframe apply join search startswith append float split float startswith match groups DataFrame Index int range index len iterrows append DataFrame Index generate_confused_with_string DataFrame Index DataFrame Index print get_file_metrics len create_metrics_macro_micro_df create_metrics_indiv_relations_df sum generate_pretty_summary_confusion_matrix get_confusion_matrix_as_df print tolist ttest_rel print print indiv_metric_comparison create_summary res get_macro_micro_metric_comparison get_accuracy_difference sentence_replace range len append per_sentence_replacement_ddi zip most_common Counter enumerate list std info len readlines close embedding_vocab uniform split zeros keys embedding_file open readline list info readlines close map uniform split open std embedding_file len pos min amin zeros max range amax append zeros range len print append zeros array range len list format max_e1_len print pad_elmo_embedding max_len pad_bert_embedding max_e2_len relative_distance assign_splits zip append zeros max range enumerate len int permutation arange min ceil float array range len print print get str File append array 
range len OrderedDict list mean append average_over_token_embedding extend append range len append int len split zeros sum split open print graph_file_reader load text nlp append getAttribute getElementsByTagName split list intersection create_positions_dict sort_position_keys remove_whitespace parse_position range len str append get_entity_start_and_end get_entity_replacement_dictionary join getElementsByTagName parse tag_sentence glob print get_other_entities get_entity_positions_and_replacement_dictionary tqdm get_entity_dict append getAttribute DataFrame tokenize to_csv read_csv apply equals range len print empty basename join extract_concept_from_string split list items get_concept_subparts get_line_number_and_word_number print extract_relation_from_string strip glob join DataFrame tqdm str list get_line_number_and_word_number append keys str list get_line_number_and_word_number print append list extend relation_exist_in_pair_list append keys range enumerate len remove strip get_entity_replacement_dictionary read_rel_line append print get_line_number_and_word_number list get_line_number_and_word_number strip get_entity_replacement_dictionary append keys assign_e1_e2_relation split glob join DataFrame tqdm replace zeros argmax all range astype int8 time global_step info inputs accuracy prediction extend vstack add_summary run basicConfig get_word_dict log_info load_embedding_senna save_path print low_freq_thresh get_data load_embedding create_folder_if_not_exists early_stop vectorize get_maximum_entity_and_sentence_length len join time dump output_folder batch_size fold output_folder_creation init open int _sample round num_epoches join remove final_result_folder len get_eval_column extend index range to_csv output_dir read_csv create_folder_if_not_exists DataFrame exists drop max_sent_len max_ent_len max_length_all_data max build_dict print id info preprocess_data_noncrossvalidated tuple test_text_dataset_path fold open seed str list transpose openFileAsList append get_data_for_fold preprocessing_type range early_stop_size bert_function get_elmo_embeddings zip border_size sample get_bert_token_embeddings int get_train_dev_data_for_fold res get_bert_CLS_embeddings train_text_dataset_path len join str format result_folder output_folder get_results_dict print fold id tensorboard_folder output_dir append get_current_time_in_miliseconds test_answer_file create_folder_if_not_exists capitalize join split function split_data_cut_sentence now makedirs print replace save replace replace close startswith float open replace close startswith float open replace close startswith float open format read_macro_f1_from_result_file_semeval read_micro_f1_from_result_file_i2b2 system test_writer_for_perl_evaluation read_macro_f1_from_result_file_ddi use_test batch_size use_bert_tokens id m_plus dataset seed l2_reg_lambda num_filters lr_boundaries use_bert_CLS pos_embed_num preprocessing_type sgd_momentum early_stop_size num_epoches use_piecewise_pool patience m_minus momentum embedding_vocab copy border_size gamma cross_validate_report time pos_embed_size filter_sizes keep_prob cross_validate use_elmo pickle_seed lr_values early_stop embedding_file | # REflex: Flexible Framework for Relation Extraction in Multiple Domains  Paper: http://arxiv.org/abs/1906.08318 REflex is a unifying framework for Relation Extraction, applied on 3 highly used datasets (from the general, biomedical and clinical domains), with the ability to be extendable to new datasets. REflex has experimental as well as design goals. 
The experimental goals are to identify sources of variability in results for the 3 datasets and to provide the field with a strong baseline model to compare against for future improvements. The design goals are to identify best practices for relation extraction and to serve as a guide for approaching new datasets. In order to replicate the experiments for this work, generate the data beyond the pre-processing stage by going into the notebooks/ folder and following the README.md instructions there. Note: default hyperparameters are listed in scripts/parser.py. The hierarchy of this code is organized as follows: 1. relation_extraction stores the main components of the framework, including converters, pre-processing module and models 2. eval/ contains the evaluation scripts used to evaluate the model 3. scripts/ contains the scripts to run the model | 2,161
geffy/tffm | ['link prediction'] | ['Higher-Order Factorization Machines'] | tffm/utils.py test.py tffm/core.py tffm/__init__.py setup.py tffm/models.py tffm/base.py main read TestFM TFFMBaseModel batcher batch_to_feeddict TFFMCore TFFMRegressor TFFMClassifier get_shorter_decompositions powers_and_coefs pow_wrapper sort_topologically loss_logistic matmul_wrapper sigmoid count_nonzero_wrapper initial_coefficient loss_mse setup min range int64 astype float32 tocoo int list defaultdict ones tuple sort range unique append zeros sum array combinations_with_replacement enumerate len walk_depth_first list defaultdict takewhile sum unique factorial get_shorter_decompositions items list defaultdict sort_topologically ones unique append range len transpose exp log add | This is a TensorFlow implementation of an arbitrary order (>=2) Factorization Machine based on paper [Factorization Machines with libFM](http://dl.acm.org/citation.cfm?doid=2168752.2168771). It supports: * dense and sparse inputs * different (gradient-based) optimization methods * classification/regression via different loss functions (logistic and mse implemented) * logging via TensorBoard The inference time is linear with respect to the number of features. Tested on Python3.5, but should work on Python2.7 This implementation is quite similar to the one described in Blondel's et al. paper [https://arxiv.org/abs/1607.07195], but was developed independently and prior to the first appearance of the paper. # Dependencies | 2,162 |
gengchenmai/arcgis-online-search-engine | ['information retrieval'] | ['Semantically-Enriched Search Engine for Geoportals: A Case Study with ArcGIS Online'] | server/data/w2v_text.py server/models/nltk_preprocess_searchText.py server/models/w2d_tf_idf_simple.py w2vModel2TXTFile preprocessText textLemmatization readJSONData preprocessText rewriteExistingFile simpleSemanticSearch textLemmatization load save_word2vec_format join sub word_tokenize lemmatize len WordNetLemmatizer lower range split close write open load len print open BeautifulSoup lower get_text load most_similar preprocessText textLemmatization load_word2vec_format zeros split | # Semantically-Enriched Search Engine for Geoportals: A Case Study with ArcGIS Online Code for our ArcGIS Online semantically-enriched search engine presented in [our AGILE 2020 paper](http://www.geog.ucsb.edu/~gengchen_mai/). ## Search Engine Architecture <p align="center"> <img src="github-image/arcgis_engine.png" alt="argis_engine" width="1000" /> </p> ## Search Engine Interface <p align="center"> <img src="github-image/search_engine.png" alt="search_engine" width="1000" /> </p> | 2,163 |
gentlemanman/fcrn_pytorch | ['depth estimation'] | ['Deeper Depth Prediction with Fully Convolutional Residual Networks'] | train.py fcrn.py test.py weights.py loader.py utils.py UpProject FCRN Bottleneck test_loader NyuDepthLoader main loss_huber loss_mse load_split validate load_weights show print size astype float32 imshow DataLoader permute NyuDepthLoader load_split model zero_grad DataLoader save cuda str Adam FCRN load_state_dict range load_split format eval load_weights item loss_huber type float NyuDepthLoader load backward print Variable parameters isfile loss_fn train step int readline getcwd len close append open print float eval print permute item type state_dict | # fcrn_pytorch Deeper Depth Prediction with Fully Convolutional Residual Networks(2016 IEEE 3D Vision)的pytorch实现 论文:https://arxiv.org/pdf/1606.00373.pdf 主要参考:官方源码https://github.com/iro-cp/FCRN-DepthPrediction 前人实现https://github.com/XPFly1989/FCRN >fcrn_pytorch: 文件结构 >>data:待处理的数据 >>>testIdxs.txt trainIdxs.txt nyu_depth_v2_labeled >>model:保存模型 >>>NYU_ResNet-UpProj.npy model_300.pth | 2,164 |
geoai-lab/EUPEG | ['information retrieval'] | ['Are We There Yet? Evaluating State-of-the-Art Neural Network based Geoparsers Using EUPEG as a Benchmarking Platform'] | dependency/SpaCyNER/main.py dependency/CamCoder/geoparse.py main geoparse main load km cursor text2mapvec encode format predict index_to_coord nlp ents end_char pad_list get_coordinates append load str load_model open label_ start_char print strip dumps nlp append ents end_char | # An Extensible and Unified Platform for Evaluating Geoparsers #### Overall description Geoparsers are useful tools that extract structured geographic information from unstructured texts, thereby enabling spatial analysis on textual data. While a number of geoparsers have been developed, they were tested on different datasets using different metrics. Therefore, it is difficult to directly compare the performances of existing geoparsers. Here, we present EUPEG: An Extensible and Unified Platform for Evaluating Geoparsers. EUPEG is an open source and Web based benchmarking platform which hosts a majority of open corpora, geoparsers, and performance metrics reported in the literature. A newly developed geoparser can be connected to EUPEG and compared with other geoparsers based on the hosted datasets and new datasets uploaded. The main objective of EUPEG is to reduce the time and efforts that researchers have to spend in preparing datasets and baselines, thereby facilitating effective and efficient comparisons of geoparsers. #### Resources This repository contains the source code of EUPEG as well as the corpora under permitted licenses. #### Repository organization The whole repository is organized as a Maven Java Web Application under the "/project" folder: * pom.xml file contains information about the EUPEG as well as its configuration details used by Maven to build the project; * The file "/src/main/webapp/index.html" contains the HTML home page of EUPEG; * The folder "/src/main/webapp/js" contains the JavaScript code for implementing the user side functions: selecting corpora, geoparsers, and metrics, sending request to the server, and visualizing the results; | 2,165 |
geopi1/DeepMRI | ['mri reconstruction'] | ['Scan‐specific robust artificial‐neural‐networks for k‐space interpolation (RAKI) reconstruction: Database‐free deep learning for fast imaging'] | save_raw_data_to_pickle.py Network.py ismrmrd/parallel_imaging_demo.py ismrmrd/ismrmrdtools/transform.py ismrmrd/ismrmrdtools/ndarray_io.py ismrmrd/ismrmrdtools/show.py ismrmrd/doc/conf.py ismrmrd/recon_multi_reps.py ismrmrd/ismrmrdtools/sense.py ismrmrd/ismrmrdtools/grappa.py ismrmrd/setup.py utils.py ismrmrd/ismrmrdtools/imageviewer.py ismrmrd/ismrmrdtools/coils.py ismrmrd/ismrmrdtools/__init__.py ismrmrd/recon_ismrmrd_dataset.py ismrmrd/csm_estimation_demo.py main.py ismrmrd/ismrmrdtools/simulation.py ismrmrd/generate_cartesian_shepp_logan_dataset.py data_manager.py SpatialDataHandler RAKIDataHandler RAKINetwork SpatialNetwork main bcolors read_json_with_line_comments startup visualize_results main create apply_prewhitening smooth calculate_prewhitening calculate_csm_inati_iter calculate_csm_walsh calculate_grappa_unmixing _pad_kernel estimate_convolution_kernel main read_ismrmrd_image_series WindowLevelMouse ImageViewer write_ndarray read_ndarray _calculate_sense_unmixing_1d calculate_sense_unmixing imshow phantom _mod_shepp_logan generate_birdcage_sensitivities sample_data _shepp_logan _select_phantom transform_kspace_to_image transform_image_to_kspace data y receiverChannels kspace_encode_step_1 z sorted list CreateFromDocument read_acquisition range read_xml_header slice contrast number_of_acquisitions join repetition print maximum kspace_encode_step_2 tqdm zeros Dataset x join format print endswith copy read_json_with_line_comments listdir makedirs data arange R ssim abs max subplot list append_axes make_axes_locatable colorbar imshow title log10 savefig gca range gcf ACS_size tight_layout mean eval equalize_hist set_size_inches print min figure subsampled_data phantom arange randn limitType Acquisition toxml resize round clearAllFlags fieldOfView_mm generate_birdcage_sensitivities ACQ_IS_NOISE_MEASUREMENT encoding ACQ_LAST_IN_ENCODE_STEP1 cartesian setFlag ACQ_FIRST_IN_ENCODE_STEP1 pad append range matrixSize write_xml_header ACQ_FIRST_IN_SLICE close tile acquisitionSystemInformationType transform_image_to_kspace ACQ_LAST_IN_REPETITION ACQ_FIRST_IN_REPETITION ACQ_LAST_IN_SLICE ismrmrdHeader encodingSpaceType experimentalConditionsType append_acquisition encodingLimitsType Dataset create oversampling noise_level add_argument acceleration output repetitions coils ArgumentParser parse_args set_defaults matrix_size reshape asmatrix inv sqrt H cholesky float shape norm smooth dot zeros sum range conj norm asarray eps format isinstance concatenate print abs smooth copy mean sqrt sum range conj shape uniform_filter real zeros imag hamming asmatrix ifftshift calculate_csm_walsh max abs ones squeeze shape _pad_kernel sum range asarray fftshift astype sqrt ifftn tile nonzero T reshape argwhere estimate_convolution_kernel zeros conj svd print asmatrix reshape dot pinv argwhere H eye zeros abs max range zeros asarray shape File warn array zeros max range show squeeze ismrmrd_group ImageViewer ismrmrd_file time_dimension read_ismrmrd_image_series dtype str write ndim tobytes zeros range open dtype read itemsize endswith complex128 float64 tuple reshape float32 complex64 zeros frombuffer prod range open _calculate_sense_unmixing_1d abs squeeze complex64 shape sqrt zeros sum range T range pinv trace H zeros sum diag show set_cmap set_title reshape set_axis_off add_subplot extend colorbar connect dict figure 
append RectangleSelector range astype shape transform_image_to_kspace tile zeros exp arctan2 abs squeeze cos pi sqrt tile sin zeros float sum range cos pi _select_phantom sin zeros _mod_shepp_logan _shepp_logan list fftshift ndim ifftn ifftshift range list fftshift ndim ifftshift fftn range | # DeepMRI Pytorch implementation of RAKI paper with mild changes and optimizations [1] ## Getting Started Clone the Repo: ```bash git clone https://github.com/geopi1/DeepMRI.git ``` ### Datasets Download the Datasets: [link to mridata](http://mridata.org/list) | 2,166 |
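The DeepMRI record above reimplements RAKI, which fits a small scan-specific network on the fully sampled autocalibration (ACS) region of k-space and then uses it to interpolate the missing k-space lines of the same scan. The sketch below illustrates that training scheme only; it is not the repository's Network.py, and the kernel shapes are simplified (real RAKI uses asymmetric kernels matched to the undersampling pattern).

```python
# Conceptual RAKI-style sketch: scan-specific training on the ACS block.
import torch
import torch.nn as nn

class TinyRAKI(nn.Module):
    def __init__(self, coils):
        super().__init__()
        ch = 2 * coils                      # real/imag parts stacked as channels
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, ch, kernel_size=3, padding=1),
        )

    def forward(self, kspace):              # kspace: (batch, 2*coils, ky, kx)
        return self.net(kspace)

def fit_on_acs(model, acs_in, acs_target, epochs=200, lr=3e-3):
    """Scan-specific training: only the ACS block provides supervision."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((model(acs_in) - acs_target) ** 2)
        loss.backward()
        opt.step()
    return model
```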
georgelamb19/chempropBayes | ['molecular property prediction'] | ['Bayesian Graph Neural Networks for Molecular Property Prediction'] | chemprop/train/__init__.py chemprop/features/utils.py chemprop/bayes/swag.py scripts/pdts/pdts_dropR_t0.py chemprop/models/model_bbp.py scripts/gp.py chemprop/utils.py scripts/dropA.py scripts/pdts/pdts_sgld_g0.py chemprop/train/bayes_tr/sgld_tstr.py chemprop/bayes/gp.py chemprop/models/mpn_dun.py chemprop/train/evaluate.py chemprop/train/run_training.py scripts/pdts/pdts_bbp_t0.py chemprop/data/data.py scripts/pdts/pdts_swag_t0.py scripts/post/new_noise.py chemprop/data/__init__.py chemprop/bayes/bbp.py chemprop/train/predict.py chemprop/train/bayes_tr/swag_tstr.py chemprop/models/model_dun.py scripts/bbp.py scripts/dropR.py chemprop/models/__init__.py chemprop/train/cross_validate.py scripts/pdts/pdts_dropR_g0.py chemprop/train/pdts.py chemprop/nn_utils.py chemprop/train/save_test_data.py chemprop/data/scaffold.py scripts/pdts/pdts_bbp_g0.py chemprop/__init__.py scripts/map.py scripts/pdts/pdts_dropA_t0.py chemprop/args.py chemprop/features/__init__.py scripts/swag.py scripts/pdts/pdts_gp_g0.py chemprop/bayes/sgld.py chemprop/train/bayes_tr/bbp_tr.py chemprop/models/model.py chemprop/train/bayes_tr/gp_tr.py scripts/bayesHyp.py scripts/pdts/pdts_dropA_g0.py chemprop/models/mpn.py scripts/pdts/pdts_sgld_t0.py chemprop/data/scaler.py chemprop/train/make_predictions.py chemprop/bayes/dun.py scripts/pdts/pdts_gp_t0.py chemprop/bayes/__init__.py scripts/dun.py chemprop/train/new_noise.py chemprop/features/featurization.py chemprop/train/train.py chemprop/train/bayes_tr/sgld_tr.py scripts/sgld.py setup.py chemprop/train/bayes_tr/swag_tr.py chemprop/features/features_generators.py chemprop/train/bayes_tr/__init__.py chemprop/data/utils.py chemprop/bayes_utils.py chemprop/train/bayes_tr/dun_tr.py scripts/pdts/pdts_swag_g0.py scripts/post/save_test_data.py scripts/pdts/pdts_map_g0.py HyperoptArgs InterpretArgs get_checkpoint_paths CommonArgs SklearnPredictArgs SklearnTrainArgs TrainArgs PredictArgs scheduler_const enable_dropout unflatten_like flatten neg_log_like initialize_weights index_select_ND NoamLR compute_molecule_vectors get_activation_function compute_pnorm compute_gnorm param_count build_lr_scheduler get_loss_func prc_auc load_checkpoint build_optimizer load_args accuracy mse save_smiles_splits create_logger get_metric_func save_checkpoint rmse load_task_names load_scalers makedirs BayesLinear KLD_cost neg_log_likeDUN predict_MCdepth initial_inducing_points DKLMoleculeModel predict_std_gp GPLayer SGLD swag_parameters SWAG MoleculeDataset MoleculeDataLoader MoleculeSampler MoleculeDatapoint log_scaffold_stats scaffold_split generate_scaffold scaffold_to_smiles StandardScaler get_smiles get_header get_num_tasks get_data_from_smiles get_data split_data get_class_sizes get_task_names filter_invalid_smiles validate_data rdkit_2d_features_generator rdkit_2d_normalized_features_generator get_features_generator morgan_binary_features_generator get_available_features_generators register_features_generator morgan_counts_features_generator get_bond_fdim MolGraph bond_features atom_features mol2graph get_atom_fdim BatchMolGraph onek_encoding_unk save_features load_features MoleculeModel MoleculeModelBBP MoleculeModelDUN MPNEncoder MPN MPNDUN MPNEncoderDUN cross_validate evaluate_predictions evaluate make_predictions new_noise pdts predict run_training save_test_data train train_bbp train_dun train_gp train_sgld train_sgld_pdts train_swag train_swag_pdts endswith 
join walk append train append shape view numel pi sqrt tensor sum log len index_select size view xavier_normal_ parameters constant_ extend training tqdm MoleculeDataLoader eval train numpy dirname Namespace save load from_dict update MoleculeModel debug device TrainArgs load_state_dict cuda info vars to keys state_dict load vars from_dict TrainArgs precision_recall_curve param_groups len join FileHandler getLogger addHandler StreamHandler DEBUG setLevel INFO makedirs append sorted smiles makedirs sum reshape pi sqrt tensor sum log len tolist extend eval inverse_transform numpy print extend feature_extractor repeat num_tasks batch_graph len stds tolist extend sqrt eval numpy array data pop list register_buffer insert zero_ append keys MurckoScaffoldSmiles defaultdict generate_scaffold tqdm add enumerate seed list sorted Random values debug log_scaffold_stats shuffle mols append scaffold_to_smiles count_nonzero debug nanmean append array get_header concatenate debug set load_features append filter_invalid_smiles len MoleculeDataset debug filter_invalid_smiles len int Random tuple log_scaffold_stats shuffle extend append range len count_nonzero targets append range len pop MolFromSmiles get_header tqdm add set unique float zeros GetMorganFingerprintAsBitVect ConvertToNumpyArray zeros GetHashedMorganFingerprint ConvertToNumpyArray RDKit2D RDKit2DNormalized len GetHybridization int GetTotalDegree GetAtomicNum GetFormalCharge GetChiralTag GetTotalNumHs onek_encoding_unk GetBondType savez_compressed load seed join show_individual_scores zip run_training num_folds nanmean info append save_dir array range enumerate makedirs metric_func info append float range len evaluate_predictions predict targets get_data list tolist normalize_features range predict get get_data_from_smiles MoleculeDataLoader zip features_scaling load_scalers setattr enumerate items print load_checkpoint makedirs load_args tqdm MoleculeDataset preds_path zeros checkpoint_paths len children checkpoint_path MoleculeModel rho_min_bbp get_data get_metric_func rho_min_dun features_size create_log_cat MoleculeModelBBP ones rho_max_bbp tolist len ensemble_start_idx samples normalize_features results_dir range predict init_rho debug set_targets MoleculeDataLoader initial_inducing_points num_tasks GPLayer features_scaling rho_max_dun join MoleculeModelDUN isinstance savez print load_checkpoint ensemble_size makedirs num_workers split_data zeros array fit children MoleculeModel checkpoint_path rho_min_bbp pdts_batches savez get_data get_metric_func gp_layer device features_size save_dir cuda log str MoleculeModelBBP swag ones rho_max_bbp tolist Adam train_data_size smiles samples normalize_features results_dir to append range predict DKLMoleculeModel init_rho VariationalELBO hstack shuffle targets set_targets MoleculeDataLoader split_sizes epochs_init initial_inducing_points init num_tasks manual_seed unique zip GPLayer MultitaskGaussianLikelihood sample pop join int scheduler_const epochs_init_map bbp deepcopy T print load_checkpoint isinstance fit makedirs gp num_workers named_parameters parameters sgld train_sgld_pdts thompson zeros train epochs array train_swag_pdts len children log_cat isinstance inverse_transform tolist extend apply gp_layer eval thompson randint numpy train_sgld MoleculeModel checkpoint_path train_dun stds get_data get_metric_func save_checkpoint numpy evaluate_predictions save device features_size save_dir cuda log build_lr_scheduler swag exp predict_MCdepth train_swag dun tolist len dir Adam ensemble_start_idx 
samples normalize_features results_dir to sum range predict debug predict_std_gp set_targets MoleculeDataLoader num_tasks manual_seed init info features_scaling train_bbp sample join scheduler_const train_gp bbp ensemble_size evaluate savez print load_checkpoint gp makedirs num_workers nanmean split_data sgld log_noise zeros train epochs array fit features_scaling savez debug fit tolist get_data set_targets split_data get_metric_func num_tasks normalize_features zeros features_size array len model zero_grad loss_func device compute_gnorm log exp ones train_data_size shape compute_pnorm to samples_dun sum range log_cat debug samples_bbp join backward pdts get_last_lr log_noise Tensor step children rho_min_bbp save_checkpoint device cuda log MoleculeModelBBP rho_max_bbp Adam to range init_rho epochs_bbp MoleculeDataLoader zip deepcopy T scheduler_const join isinstance evaluate print load_checkpoint named_parameters nanmean parameters train children NoamLR epochs_dun rho_min_dun save_checkpoint device cuda create_log_cat log exp Adam to sum range log_cat init_rho MoleculeDataLoader zip rho_max_dun deepcopy T scheduler_const join MoleculeModelDUN isinstance evaluate print load_checkpoint named_parameters nanmean parameters train NoamLR gp_layer save_checkpoint cuda log Adam epochs_gp range DKLMoleculeModel VariationalELBO MoleculeDataLoader initial_inducing_points num_tasks GPLayer MultitaskGaussianLikelihood deepcopy scheduler_const join evaluate print load_checkpoint nanmean train len join evaluate print param_groups log nanmean MoleculeDataLoader save_checkpoint OneCycleLR samples SGLD train range mix_epochs len join int print param_groups len named_parameters save_checkpoint OneCycleLR samples SGLD train range mix_epochs makedirs cov_mat join max_num_models evaluate param_groups print SGD epochs_swag nanmean MoleculeDataLoader SWAG save_checkpoint OneCycleLR collect_model train range log len cov_mat scheduler_const join max_num_models param_groups print SGD named_parameters epochs_swag SWAG save_checkpoint OneCycleLR collect_model train range len | # Bayesian Molecular Property Prediction This repo is a fork of [chemprop](https://github.com/chemprop/chemprop). We apply a set of Bayesian methods to the chemprop directed message passing neural network (D-MPNN). We assess predictive accuracy, calibration and performance on a downstream molecular search task. Our research is described in the paper [Bayesian Graph Neural Networks for Molecular Property Prediction](https://arxiv.org/abs/2012.02089), presented at the NeurIPS 2020 ml4molecules workshop. ## Methods The code contains implementations of eight methods, abbreviated as follows: * **MAP**: classical *maximum a posteriori* training; we find the regularised maximum likelihood solution. * **GP**: the final layer of the readout FFN is replaced with a GPyTorch stochastic variational GP (https://docs.gpytorch.ai/en/v1.2.0/examples/04_Variational_and_Approximate_GPs/SVGP_Regression_CUDA.html). We train the resulting model end-to-end (deep kernel learning). * **DropR**: MC dropout across readout FFN layers (https://arxiv.org/abs/1506.02142). * **DropA**: MC dropout over the full D-MPNN. * **SWAG**: Stochastic Weight Averaging - Gaussian (https://arxiv.org/abs/1902.02476). | 2,167 |
gerazov/prosodeep | ['speech synthesis'] | ['A Variational Prosody Model for the decomposition and synthesis of speech prosody'] | prosodeep/prosodeep_dsp.py prosodeep.py prosodeep/prosodeep_params.py docs/conf.py website/publishconf.py docs/my_rtd_theme/__init__.py prosodeep/prosodeep_learn.py prosodeep/prosodeep_corpus.py prosodeep/prosodeep_data.py prosodeep/prosodeep_eval.py website/fabfile.py website/pelicanconf.py prosodeep/prosodeep_plot.py prosodeep/__init__.py prosodeep/prosodeep_models.py setup get_html_theme_path create_masks process_tone_scope reformat_corpus_static contour_scope_count downcast_corpus scale_and_expand_corpus split_corpus get_strength append_contour_to_corpus build_corpus remove_phrase_types add_context add_contour_generator_count generate_ramps get_marks reformat_corpus_rnn generate_input_ramps corpus_to_fpro extract_dur_stats read_textgrid extract_syll_stats extract_f0_stats normalise_min_max get_energy wcorr frame_up f0_smooth wrmse eval_performance get_mask eval_sum train_model analysis_by_synthesis synthesise_rnn_testset train_rnn_model synthesise_deep_testset synthesise_rnn_contours synthesise_deep_contours synthesise_anbysyn_testset construct_contour_generator deep_rnn_baseline_model RNNGraph loss_mmd ContourGeneratorVRNN loss_mse loss_kld ContourGeneratorRNN loss_mmd_rnn deep_rnn_model PyTorchRegressor ContourGeneratorMLP deep_baseline_model ContourGeneratorVar StaticGraph deep_model ContourGeneratorStrengthMLP create_parser Params plot_contours plot_expansion plot_worst plot_batch_losses plot_final_losses plot_losses init_colors draw_sigma plot_eval plot_histograms gh_pages rebuild serve preview build clean reserve cf_upload regenerate publish dirname abspath add_html_theme abspath dirname process_tone_scope arange getLogger strip warning get_marks f0_ref DataFrame phrase_types columns end_marks isochrony_clock read_textgrid tolist natsorted get_strength append tone_levels range format use_ipcgs function_types size datafolder unique info tone_scope file_type enumerate re_folder T error Series isnan append_contour_to_corpus match any zeros array re_vowels values natsorted groups notnull unique max range compile to_numeric getLogger index apply info getLogger concat copy target_columns use_only_last_iteration iterations f0_scale dur_scale model_type info range orig_columns getLogger debug Series append generate_input_ramps arange reset_index getLogger unique zip zeros values remove append zip getLogger insert n_unit tolist contourtype index phrasetype info get_loc context_columns range values len int getLogger insert tolist index info get_loc range max contour_count_columns len reset_index unique size float split get_marks array sorted arange __next__ GroupShuffleSplit StratifiedShuffleSplit tolist index unique append isin array len getLogger DataFrame max list iterrows tolist groups n_context sort_values range format astype nan info fill empty orig_columns vowel_pts int items enumerate index zeros context_columns contour_count_columns len getLogger n_unit max values list groups n_context sort_values range format size astype nan unique info fill marks zip empty orig_columns vowel_pts int items enumerate contourtype index isnan zeros context_columns contour_count_columns len append arange getLogger duration interp1d dur_stats syll_stats f0_max linspace re_f0 log f0_min extract_syll_stats len natsorted f0_folder disol input orthographs_level f0_smooth f0_method tone_levels append plot_histograms interfunc range format use_ipcgs save_path size datafolder 
end_time nan info fill empty tiers extract_f0_stats isochrony_gravity enumerate start_time re_folder syll_level int show_plot text vowel_marks phrase_level isclose f0_stats extract_dur_stats zeros array re_vowels diff getLogger interp1d get_data f0_max linspace f0_min speaker read_textgrid f0_folder f0_method interfunc database format f0_ref_method size close datafolder set mean info distplot int print text vowel_marks match figure median array re_vowels diff value_counts getLogger duration get_data DataFrame speaker read_textgrid apply database format close datafolder set mean info distplot to_numeric print figure median value_counts getLogger duration get_data warning DataFrame speaker read_textgrid apply database format close datafolder set mean end_time info distplot syll_level to_numeric print figure median isclose arange grid interp1d pitch_fs lowpass_fg lowpass_order ylabel use_median savefig interfunc format plot save_path medfilt close filtfilt firwin use_lowpass median_order xlabel figure interfunc_back sqrt size sum isnan sqrt mean any sum arange size transpose pad tile matrix max abs frame_up sum min feat_min feat_max max values getLogger duration f0_scale f0_ref DataFrame max log values wrmse exp plot_eval_n_files read_textgrid target_columns wcorr append eval_columns f0_smooth range format save_path size datafolder end_time unique info empty plot_eval enumerate vowel_pts eval_voiced_offset start_time show_plot print text Series vowel_marks zeros ravel array re_vowels len format getLogger get_mask Series eval_sum_columns info append DataFrame validation_size getLogger mse_ iterations values target_columns sum best_epoch_ predict strength_method range asarray format use_strength astype copy info use_validation orig_columns items print split_corpus max_iter_ context_columns fit getLogger batch_size reg_strengths activation optimization hidden_units reg_strengths_mean max_iter adjust_max_iter n_context patience use_strength learn_rate info vowel_pts use_cuda n_hidden_context l2 PyTorchRegressor strength_method early_thresh validation_size __next__ batch_size getLogger reg_strengths activation corpus_name normalisation_type iterations hidden_units reg_strengths_mean optimization T_max vae_input n_latent values phrase_types early_stopping GroupShuffleSplit step tolist target_columns model_type array n_context load_state_dict range predict state_dict asarray inf format function_types patience reset_optimizer use_strength astype mean learn_rate info zip pkl_path drop_rate orig_columns vowel_pts use_validation remove enumerate use_cuda n_hidden_context reformat_corpus_static l2 reshape print vae reg_vae deep_baseline_model cpu strength_method early_thresh deep_model fit validation_size batch_size getLogger activation corpus_name normalisation_type iterations hidden_units optimization warning use_test_set n_latent phrase_types early_stopping vae_as_input_hidd tolist target_columns ShuffleSplit model_type array n_context load_state_dict n_hidden_rnn next range rnn_model predict state_dict deep_rnn_baseline_model asarray inf format function_types patience syntax_functions rnn_inputs mean vae_as_input learn_rate unique info pkl_path orig_columns vowel_pts use_validation enumerate use_cuda l2 reshape fit deep_rnn_model isnan vae reg_vae morpheme_functions split cpu reformat_corpus_rnn early_thresh len unsqueeze_ format getLogger print size astype use_strength index vae contour_generator info cpu ravel cat unsqueeze_ squeeze_ format zip getLogger print astype vae marks info cpu numpy module 
values enumerate getLogger corpus_name normalisation_type cuda values tolist target_columns model_type predict replace astype set info zip pkl_path enumerate remove reformat_corpus_static reshape vae cpu deep_model reformat_corpus_rnn replace getLogger reshape deep_rnn_model target_columns corpus_name vae model_type normalisation_type unique info cpu pkl_path cuda predict enumerate format getLogger print values astype unique info sum max range predict view exp exp_ mean repeat any masked_select sum unsqueeze_ repeat randn_like masked_select randn_like add_argument phrase_types str mod use function_types enumerate int format getLogger xlabel axvline close ylabel tight_layout set get_data title savefig figure legend info distplot arange getLogger set_yticklabels grid add_subplot axis iterations f0_scale dur_scale axhline max values use twiny set_xlabel target_columns nansum savefig legend append database update format reset_index plot set_xticklabels function_types size set_xlim use_strength tight_layout close get_ylim nan get_xlim zip fill empty processed_corpus_name orig_columns vowel_pts int error text set_yticks set_xticks set_ylabel figure any ravel yscale format use plot xlabel grid close ylabel savefig figure legend yscale format use arange plot xlabel size grid axis ylabel close savefig figure linspace legend max arange set_yticklabels grid add_subplot left_max ravel f0_scale axhline unsqueeze tensor tones phrase_types list right_max use end_marks set_xlabel axvline model_type title savefig contour_generator n_context append predict cat format plot set_xticklabels phrase_max use_strength close new_zeros empty enumerate vowel_pts unsqueeze_ items text set_yticks vae set_ylabel set_xticks figure cpu generate_ramps numpy len max_iter format use applymap save_path xlabel close ylabel set iterations title savefig figure DataFrame heatmap iterations to_excel rename save max values copyfile sort_values sum range format ExcelWriter save_path mean mkdir unique orig_columns enumerate int Series index array fill_betweenx grid axis f0_max log f0_min use set_edgecolor ylabel title savefig legend set_label format plot close get_frame vlines xlabel set_facecolor figure Ellipse add_artist rmtree isdir makedirs local local local format chdir write AddressReuseTCPServer serve_forever deploy_path serve build local rebuild local rsync_project local rebuild format | # ProsoDeep **Deep understanding and modelling of the hierarchical structure of Prosody** The [ProsoDeep project](https://gerazov.github.io/prosodeep/) seeks to gain a deeper understanding of the hierarchical structure of the language of prosody through the utilisation of deep models. The models are designed to facilitate the advanced exploration of prosodic phenomena in lingustics, as well as advancements in speech technologies that rely both on the synthesis of prosody, e.g. expressive text-to-speech (TTS) systems, and its analysis, e.g. automatic speech recognition (ASR) and speech emotion recognition (SER). # The models The different models developed within the ProsoDeep project are based on the [Superposition of Functional Contours (SFC) model](https://gerazov.github.io/prosodeep/project#sfc), which is a top-down approach that aims to decompose prosodic contours into their constituent functionally relevant elementary contours, named also *prosodic prototypes* or *clichés* \[[sfc](#References)\]. 
They include the: - [PySFC](https://gerazov.github.io/prosodeep/pysfc) model -- a [Python](https://www.python.org/) implementation of the original SFC model \[[pysfc](#References)\], - [Weighted SFC (WSFC)](https://gerazov.github.io/prosodeep/wsfc) model -- that incorporates the modelling of prominence of the extracted prosodic prototypes \[[wsfc](#References)\], - [Variational Prosody Model (VPM)](https://gerazov.github.io/prosodeep/vpm) -- that models the linguistic context-specific variability of the prosodic prototypes \[[vpm](#References)\], and | 2,168 |
germanjke/StyleTransformerGANs | ['style transfer'] | ['Multi-style Generative Network for Real-time Transfer'] | help_functions.py network.py main.py tensor_load_rgbimage preprocess_batch tensor_save_rgbimage tensor_save_bgrimage transform int ANTIALIAS transpose convert resize float fromarray numpy astype save chunk tensor_save_rgbimage cat transpose chunk cat collect Variable style_model preprocess_batch setTarget unsqueeze empty_cache tensor_save_bgrimage | # StyleTransformerGANs ## Final project for the Deep Learning School. Style Transformer via a Telegram bot. ## You can check how this works in the demo notebooks ### About Bot's name: **@styletransferjune2020bot**, try it! UPD: It's over because my AWS free month is over :) When you want to impose a particular style on an image, you can use a bot like this one. The bottom line is simple: upload a content image, upload a style image, and the bot creates the combined, stylized image. To see what this looks like in practice, take a look at a few examples:  | 2,169 |
getcontrol/KYC-tensorflow | ['face detection'] | ['MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream'] | app.py upload_file _load_graph upload_image_file init _make_gaussian _js_2d _make_gaussians dsnt _softmax2d _kl_2d _normalise_heatmap js_reg_loss imwrite RANSAC tuple fromstring COLOR_RGB2BGR imdecode save resize _load_graph Session run getvalue rotate shape COLORMAP_JET imread astype warpPerspective BytesIO line dsnt print applyColorMap findHomography array get_tensor_by_name cvtColor multiply transpose float32 reduce_sum _normalise_heatmap stack cast tile _js_2d _make_gaussians format relu reshape normalise _softmax2d sigmoid abs log reduce_sum exp reduce_max reduce_sum reduce_max float32 reduce_sum cast range while_loop reshape | # Machine Learning Driven Identity Verification This repo contains a Jupyter Notebook that utilizes a Tensorflow Model to identify and smartly crop an Identity Card, along with an accompanying Flask app. The model was trained using a specially formatted version of the MIDV-500 dataset where the images were converted from TIF to JPG and the extraneous metadata in the annotations removed. This model also works for isolating and cropping photos of paper receipts, and possibly any conceivable rectangular image. To learn how to train the model, see the repo [KYC-train-model](https://github.com/getcontrol/KYC-train-model/). # Installation Instructions ### Jupyter Notebook 1. Create and activate a Python 3 Virtual environment. ```python3 -m venv env``` ```source env/bin/activate``` 2. Install Requirements. ```pip install -r requirements.txt``` 3. Start Jupyter Notebook. | 2,170 |
getcontrol/KYC-train-model | ['face detection'] | ['MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream'] | model/dsnt.py utils/preprocessing.py model/mobilenet/mobilenet_v2.py model/feature_extractor.py raw_data/display.py model/mobilenet/mobilenet_v2_test.py raw_data/estimate_rotation.py utils/misc.py train.py synthesis_data.py utils/preprocess_utils.py model/keypoints_heatmaps_model.py utils/__init__.py test.py raw_data/make_annot.py model/mobilenet/conv_blocks.py receipt_dataset.py build_data.py model/mobilenet/mobilenet.py _int64_list_feature image_label_to_tfexample _bytes_list_feature ImageReader _get_files _convert_dataset _parse_function main get_dataset_split extract_receipt gen_synthesis_data_v1 gen_synthesis_data_v2 rotate_point random_synthesis_v2 random_flip_receipt shift_image random_synthesis_rec_with_bgimg random_resize_receipt _load_graph vis_input_data input_pipeline _make_gaussian _js_2d _make_gaussians dsnt _softmax2d _kl_2d _normalise_heatmap js_reg_loss extract_features _preprocess_subtract_imagenet_mean _preprocess_zero_mean_unit_range main_network keypoints_heatmaps_model expand_input_by_factor _v1_compatible_scope_naming _fixed_padding split_separable_conv2d expanded_conv _make_divisible split_conv _split_divisible mobilenet depth_multiplier _scope_all safe_arg_scope _fixed_padding apply_activation op _set_arg_scope_defaults _make_divisible NoOpScope mobilenet_base global_pool training_scope mobilenet mobilenet_base wrapped_partial training_scope MobilenetV2Test find_ops main crop random_merge main estimate_skew_angle click_and_crop get_init_fn_for_scaffold preprocess_image_and_points random_up_down_flip get_bbox_from_points random_left_right_flip preprocess_image_and_edgemap glob join extend set int list ImageReader join _get_files shuffle write mkdir ceil float range flush len _convert_dataset parse_single_example set_shape TFRecordDataset join Glob roll list crop array list size rand astype resize max list size transpose choice array fromarray list extract_receipt size convert new copy paste filter random_flip_receipt polygon randint GaussianBlur empty array random_resize_receipt cos sin list extract_receipt tuple size rand deg2rad rotate mean copy paste filter random_flip_receipt randint GaussianBlur array random_resize_receipt join str glob len write random_synthesis_v2 mkdir save split append range flush open join str glob len convert write mkdir save append random_synthesis_rec_with_bgimg range flush open partial map make_one_shot_iterator get_dataset_split batch input_pipeline line print astype waitKey COLOR_RGB2BGR imshow shape Session cvtColor run multiply transpose float32 reduce_sum _normalise_heatmap stack cast tile _js_2d _make_gaussians format relu reshape normalise _softmax2d sigmoid abs log reduce_sum exp reduce_max reduce_sum reduce_max float32 reduce_sum cast range while_loop reshape reshape cast float32 _preprocess_subtract_imagenet_mean stack extract_features add_n main_network update str get_or_create_global_step get_collection LoggingTensorHook stack image set_shape js_reg_loss UPDATE_OPS Scaffold mean_squared_error expand_dims range scalar pad int max append range identity conv2d zip append _split_divisible enumerate split items list hasattr _make_divisible pop get deepcopy get as_list as_list pool_op convert_to_tensor set_shape xavier_initializer truncated_normal_initializer deepcopy partial update_wrapper get_default_graph glob plot_and_draw join shuffle fromarray asarray new convert shape 
paste save polygon empty array glob shuffle merge var list percentile_filter zoom min rotate mean shape append resize_image range max clip amax COLOR_BGR2GRAY waitKey rotate imshow estimate_skew_angle imread array cvtColor open line print append circle len trainable_variables format latest_checkpoint print build IsDirectory Saver info reshape float32 random_left_right_flip random_up_down_flip cast get_bbox_from_points reduce_min reduce_mean reduce_max less_equal random_uniform less_equal random_uniform | # Train Tensorflow Model for Tensorflow Verification This repo contains the Python code to produce the dataset and train the Tensorflow model for use in [KYC-tensorflow](https://github.com/getcontrol/KYC-tensorflow). This training model transfers learning from a previously trained model mobilenet_v2. # Installation Instructions 1. Clone the repo. ``` git clone https://github.com/getcontrol/KYC-train-model ``` ``` cd KYC-train-model``` 2. Download and unzip the the MIDV-500 formatted dataset to 'verification-train-model' directory. https://www.dropbox.com/s/dmjbat0e1re5rkf/midv_500.zip?dl=0 3. Make a 'data' directory in verification-train-model for the dataset generation and an 'output' folder for testing. ```mkdir data``` | 2,171 |
gfxdisp/asap | ['video quality assessment'] | ['Active Sampling for Pairwise Comparisons via Approximate Message Passing and Information Gain Maximization'] | python/asap_cpu.py python/asap_gpu.py TrueSkillSolver ASAP compute_minimum_spanning_tree prob_cmps ASAP Psi get_maximum Phi true_skill kl_divergence_approx tensor Normal Psi n2m size m2n sqrt Psi device to Phi range meshgrid Normal fill_diagonal_ sorted asarray inf minimum_spanning_tree edges from_numpy_matrix array Normal sum expand_dims stack where T compute_minimum_spanning_tree unbind prob_cmps shape stack coo_matrix unsqueeze numpy get_maximum to range cuda true_skill len | gfxdisp/asap | 2,172 |
ggeorgak11/mapping_navigation | ['semantic segmentation'] | ['Simultaneous Mapping and Target Driven Navigation'] | train_MapNet.py test_NavNet.py helper.py IL_Net.py train_NavNet.py data_helper.py dataloader.py test_MapNet.py baseline_rotate_and_see.py mapNet.py visualize_episodes.py baseline_random_walker.py parameters.py baseline_utils.py read_cached_data ActiveVisionDatasetEnv minus_theta_fn read_all_poses project_pixels_to_world_coords cameraPose2currentPose readDepthImage AVD AVD_IL AVD_online create_scene_graph load_depth_file get_scene_target_graphs get_state_action_cost absolute_poses getCamera get_im_pose load_scene_info getImageData load_detections get_image_poses get_sseg convert_image_by_pixformat_normalize discretize_coords invert_pose generate_detection_image build_p_gt read_label_map depth_to_3D get_det_mask candidate_targets relative_poses save_params plot_loss save_model load_model ILNet Encoder MapNet Parameters_IL Parameters ParametersMapNet evaluate_MapNet undo_discretization get_pose evaluate_NavNet softmax prepare_mapNet_input get_minibatch get_minibatch run_mapNet unroll_policy select_minibatch plot_step_loc get_images visualize_nav visualize_loc get_pcloud plot_step_nav minus_theta_fn range atan2 format asarray astype float32 stack Reader resize append asDirect ones dot append empty array range len join loadmat range join item load join hstack loadmat open dot inv zeros zeros mod discretize_coords pi floor orientations zeros range reduce astype where intersect1d floor zeros zeros reshape where float transpose astype array open add_edge DiGraph range add_node len decode list create_scene_graph load_scene_info keys append list keys zeros asarray range len append shortest_path len resize convert_image_by_pixformat_normalize depth_to_3D load_depth_file resize getCamera imread isfile readlines len split append float range open int readlines close open split zeros range int read_label_map det_dir_path label_map_path item label_index_path generate_detection_image __getattribute__ dir str list asarray plot xlabel reshape axis ylabel mean title clf savefig range makedirs print str save load str print eval train cuda view where orientations numpy max cell_size print exp asarray append unsqueeze print batch_size dets_nClasses seq_len append zeros range cuda exp EPS_START random EPS_DECAY EPS_END build_p_gt mapNet clone print zeros read shape zeros imread range len set_aspect show str plot axis imshow scatter savefig set_aspect show str plot axis scatter savefig str load_scene_info asarray get_images get_image_poses subplots set_title len get_pcloud scatter clf item range plot_step_nav makedirs str load_scene_info asarray get_images subplots absolute_poses len get_pcloud scatter clf range plot_step_loc makedirs | ## Simultaneous Mapping and Target Driven Navigation [[pdf]](https://arxiv.org/pdf/1911.07980.pdf) G. Georgakis, Y. Li, J. Kosecka, arXiv 2019 This is Python code for training and testing both mapnet and navigation components. ### Requirements - Python 3.7 - Torch 0.4.1 #### Python Packages ``` networkx torchvision 0.2.1 | 2,173 |
gggcy/AIC2020_ReID | ['vehicle re identification'] | ['Vehicle Re-Identification Based on Complementary Features'] | mgn_model/reid/utils/logging.py mgn_model/reid/utils_p1/data/dataset.py global_model/reid/__init__.py mgn_model/reid/utils_p1/data/attribute_dataset.py mgn_model/reid/__init__.py global_model/reid/feature_extraction/database.py global_model/reid/datasets/aicity_attribute_simulation.py global_model/reid/metric_learning/__init__.py mgn_model/reid/loss/triplet.py global_model/reid/models/multi_attribute_2_152_s.py post_processing/gallery_feature.py global_model/reid/models/senet154.py global_model/reid/utils/logging.py mgn_model/reid/datasets/complete_aicity_car.py global_model/reid/utils/data/attribute_dataset.py global_model/reid/utils/data/sampler.py mgn_model/reid/metric_learning/kissme.py global_model/reid/loss/dualmatch.py mgn_model/reid/utils/serialization.py mgn_model/reid/lr_scheduler.py global_model/reid/models/cross_entropy_trihard.py global_model/reid/loss/crossentropylabelsmooth.py mgn_model/reid/utils/data/transforms.py global_model/reid/datasets/aicity_car196.py global_model/reid/utils/serialization.py mgn_model/reid/feature_extraction/database.py global_model/reid/direct_evaluators.py mgn_model/reid/utils/data/sampler.py global_model/train_with_grafting.py global_model/reid/evaluation_metrics/__init__.py global_model/reid/models/cross_trihard_se_resnet.py global_model/reid/models/densenet.py post_processing/ensemble_distmat.py global_model/reid/extract_fea_from_dir.py global_model/reid/dist_metric.py mgn_model/reid/datasets/new_train.py global_model/reid/datasets/vehicle_downsample.py post_processing/run_rerank_val.py post_processing/compute_dist.py mgn_model/reid/extract_fea_from_val.py mgn_model/reid/trainers.py global_model/reid/extract_fea_from_val_tencrop.py global_model/reid/models/dense_ibn_a.py mgn_model/reid/utils_p1/osutils.py global_model/reid/models/senet.py mgn_model/reid/feature_extraction/cnn.py global_model/reid/utils/grafting.py mgn_model/reid/utils/__init__.py mgn_model/reid/utils/data/attribute_dataset.py global_model/reid/loss/triplet.py post_processing/kmeans/pre_for_kmeans.py post_processing/kmeans/conver_test2train.py global_model/reid/metric_learning/kissme.py mgn_model/reid/extract_fea_from_dir.py mgn_model/reid/utils_p1/data/__init__.py mgn_model/reid/metric_learning/euclidean.py global_model/train.py global_model/reid/models/direction.py global_model/reid/utils/data/preprocessor.py global_model/visualize/visualize.py post_processing/kmeans/kmeans.py global_model/reid/extract_fea_from_val.py mgn_model/reid/feature_extraction/__init__.py mgn_model/reid/models/resnet_mgn.py global_model/reid/evaluation_metrics/classification.py mgn_model/reid/utils_p1/data/attribute_dataset_simulation.py mgn_model/reid/models/__init__.py global_model/reid/models/se_152_ibn.py global_model/reid/utils/data/__init__.py global_model/reid/models/multi_attribute_3.py mgn_model/reid/models/hrnet_48w.py mgn_model/reid/utils/data/preprocessor.py mgn_model/reid/datasets/viper.py global_model/reid/evaluation_metrics/ranking.py global_model/reid/utils/data/transforms.py mgn_model/reid/evaluation_metrics/__init__.py global_model/reid/feature_extraction/rerank.py mgn_model/reid/datasets/gao_crop_train.py mgn_model/reid/datasets/cuhk03.py global_model/reid/loss/center_loss.py global_model/reid/datasets/complete_aicity_car.py global_model/reid/loss/multi_attribute_loss.py global_model/reid/utils/data/dataset.py global_model/reid/extract_attribute.py 
mgn_model/reid/datasets/aicity_car196.py mgn_model/reid/evaluation_metrics/ranking.py mgn_model/reid/utils_p1/data/preprocessor.py global_model/reid/datasets/aicity_attribute.py global_model/reid/datasets/small_vehicle.py global_model/reid/models/multi_attribute_8.py mgn_model/reid/models/hrnet.py global_model/reid/utils/osutils.py post_processing/query_expansion_eu.py mgn_model/reid/utils_p1/__init__.py mgn_model/reid/datasets/cuhk01.py global_model/reid/direct_trainers.py global_model/reid/models/resnet.py global_model/reid/loss/__init__.py mgn_model/reid/datasets/new_complete_aicity_car.py mgn_model/reid/utils/data/dataset.py mgn_model/reid/utils_p1/data/sampler.py global_model/reid/attribute_trainers.py global_model/reid/datasets/__init__.py post_processing/concat_feature_val.py global_model/reid/loss/npair.py global_model/reid/feature_extraction/cnn.py global_model/reid/loss/arcface.py mgn_model/reid/datasets/market1501.py mgn_model/reid/utils_p1/meters.py global_model/reid/loss/oim.py mgn_model/reid/utils_p1/data/transforms.py mgn_model/reid/utils/osutils.py global_model/reid/models/cross_trihard_senet.py global_model/reid/extract_direction_from_dir.py mgn_model/reid/utils/data/__init__.py global_model/reid/models/multi_attribute_8_152.py mgn_model/reid/datasets/__init__.py mgn_model/reid/datasets/dukemtmc.py global_model/reid/evaluators.py mgn_model/reid/loss/__init__.py mgn_model/train.py global_model/reid/models/hrnet_48w.py mgn_model/reid/loss/xentropy_sac.py post_processing/query_expansion_cos.py mgn_model/reid/datasets/small_vehicle.py mgn_model/reid/evaluators.py global_model/reid/models/inceptionv4.py global_model/reid/models/res2net.py post_processing/generate_result_val.py mgn_model/reid/utils_p1/logging.py mgn_model/reid/dist_metric.py global_model/reid/utils/data/attribute_dataset_simulation.py global_model/reid/feature_extraction/__init__.py global_model/reid/models/ibnblock.py mgn_model/reid/metric_learning/__init__.py mgn_model/reid/loss/mgn_loss.py global_model/reid/trainers.py mgn_model/reid/evaluation_metrics/classification.py mgn_model/reid/utils_p1/serialization.py global_model/reid/utils/meters.py global_model/reid/models/adaptive_avgmax_pool.py global_model/reid/models/__init__.py global_model/reid/metric_learning/euclidean.py global_model/reid/models/dpn.py mgn_model/reid/datasets/aicity_attribute.py global_model/reid/utils/__init__.py global_model/visualize/visualize_val.py mgn_model/reid/utils/meters.py mgn_model/reid/models/resnet_reid.py global_model/reid/models/se_module.py get_data main get_data main outside Multi_Attribute_Trainer Multi_Attribute_Trainer_s Type_Attribute_Trainer BaseTrainer Evaluator Trainer Direct_Trainer BaseTrainer DistanceMetric Evaluator_dismat load_pickle evaluate_all attribute_evaluate_all_s pairwise_distance attribute_pairwise_distance attribute_extract_features_simulation Evaluator_simulation Evaluator Evaluator_pkl pairwise_distance_pkl attribute_evaluate_all attribute_pairwise_distance_simulation attribute_extract_features extract_features extract_features_pkl main extract_features Combine_Net get_real_test_data main extract_features Combine_Net get_real_test_data main extract_features Combine_Net get_real_test_data get_real_test_data Combine_Net main extract_features Ten_crop Arc_Trihard_Trainer Cross_Trihard_Center_Trainer Cross_Trihard_Trainer BaseTrainer Aicity_Attribute Aicity_Attribute_Simulation Aicity_Car196 Complete_Aicity_Car Small_Vehicle Downsample_Vehicle names create get_dataset accuracy mean_ap _unique_sample 
cmc extract_cnn_feature extract_extra_attrib_feature FeatureDatabase re_ranking AngularPenaltySMLoss ArcMarginProduct CenterLoss CrossEntropyLabelSmooth DualMatchTest MultiPartNPairLoss DualMatch TypeAttributeLoss MultiAttributeLoss_s MultiAttributeLoss NPairLoss NPairAngularLoss BatchHardLoss OIM oim OIMLoss normalize euclidean_dist TripletLoss hard_example_mining Euclidean KISSME validate_cov_matrix get_metric AdaptiveAvgMaxPool2d adaptive_avgmax_pool2d pooling_factor cross_entropy_trihard_resnet50 Bottleneck conv3x3 ResNetBase cross_entropy_trihard_resnet152 cross_entropy_trihard_resnet18 cross_entropy_trihard_resnet34 Cross_Entropy_Trihard_ResNet BasicBlock cross_entropy_trihard_resnet101 Cross_Trihard_Senet cross_trihard_senet101 Cross_Trihard_Seresnet cross_trihard_se_resnet152 densenet161 DenseNet densenet169 densenet201 _DenseLayer _DenseBlock _Transition densenet121 densenet161_ibn_a BNIN DenseNet _DenseLayer densenet169_ibn_a _DenseBlock densenet121_ibn_a _Transition densenet201_ibn_a Bottleneck conv3x3 Direction_ResNet ResNetBase BasicBlock direction_resnet50 dpn68b DPN DualPathBlock InputBlock dpn68 dpn98 CatBnAct BnActConv2d dpn131 dpn107 dpn92 HighResolutionNet_reid48w Bottleneck HighResolutionModule HighResolutionNet conv3x3 BasicBlock SENet SEResNetBottleneck SEBottleneck SEResNeXtBottleneck Bottleneck Bottleneck_ibn SEModule Mixed_4a Mixed_5a Reduction_B Inception_B BasicConv2d Inception_A Reduction_A Mixed_3a Inception_C inceptionv4 InceptionV4 multi_attribute_2_resnet152_s Bottleneck conv3x3 Multi_Attribute_ResNet_3 weights_init_classifier ResNetBase BasicBlock weights_init_kaiming multi_attribute_3_resnet50 Bottleneck conv3x3 Multi_Attribute_ResNet_3 weights_init_classifier ResNetBase BasicBlock weights_init_kaiming Bottleneck multi_attribute_8_resnet50 conv3x3 Multi_Attribute_ResNet_3 weights_init_classifier ResNetBase BasicBlock weights_init_kaiming Bottleneck conv3x3 Multi_Attribute_ResNet_3 weights_init_classifier ResNetBase BasicBlock multi_attribute_8_resnet152 weights_init_kaiming res2net50_v1b_26w_4s Res2Net res2net50_v1b res2net101_v1b Bottle2neck res2net101_v1b_26w_4s res2net152_v1b_26w_4s ResNet resnet50 resnet152 resnet34 resnet18 resnet101 SENet SEResNetBottleneck SEBottleneck SEResNeXtBottleneck Bottleneck Bottleneck_ibn SEModule senet154 SENet SEBottleneck initialize_pretrained_model Bottleneck SEModule Cross_Trihard_Seresnet se_152_ibn SELayer names create filter_l1norm entropy Logger AverageMeter mkdir_if_missing load_checkpoint copy_state_dict read_json save_checkpoint write_json to_numpy to_torch Attribute_Dataset _pluck Attribute_Dataset_Simulation _pluck Dataset _pluck _pluck_query_gallery Flip_Preprocessor Flip_Preprocessor_For_Vis Preprocessor Attribute_Preprocessor_s Direct_Preprocessor Attribute_Preprocessor RandomIdentityAttributeSampler_s RandomIdentitySampler RandomIdentityAttributeSampler RandomIdentityBatchSamplerNew2 RandomErasing ResizeRandomCrop RectScale RandomSizedRectCrop imshow sort_img imshow get_data main DistanceMetric pairwise_distance evaluate_all extract_features Evaluator main extract_features Combine_Net get_real_test_data main extract_features Combine_Net get_real_test_data WarmupMultiStepLR Cross_Trihard_Trainer BaseTrainer BaseTrainer_1 Trainer Trainer_SAC_Triplet Aicity_Attribute Aicity_Car196 Complete_Aicity_Car CUHK01 CUHK03 DukeMTMC Gao_Crop_Train Market1501 New_Complete_Aicity_Car New_Train Small_Vehicle VIPeR names create get_dataset accuracy mean_ap _unique_sample cmc extract_cnn_feature FeatureDatabase MGN_loss 
normalize euclidean_dist TripletLoss hard_example_mining XentropyLoss_SAC Euclidean KISSME validate_cov_matrix get_metric HighResolutionNet_reid Bottleneck HighResolutionModule HighResolutionNet conv3x3 BasicBlock HighResolutionNet_reid48w Bottleneck HighResolutionModule HighResolutionNet conv3x3 BasicBlock copy_weight_for_branches101 ResNet101_mgn_lr copy_weight_for_branches152 Bottleneck ResNet_mgn_lr conv3x3 copy_weight_for_branches ResNet50_mgn_lr BasicBlock ResNet152_mgn_lr ResNet_reid_50 ResNet_reid_101 Bottleneck conv3x3 ResNetBase ResNet_reid_152 BasicBlock ResNet_reid names create Logger AverageMeter mkdir_if_missing load_checkpoint copy_state_dict read_json save_checkpoint write_json to_numpy to_torch Attribute_Dataset _pluck Dataset _pluck _pluck_query_gallery Preprocessor Attribute_Preprocessor RandomIdentitySampler RandomIdentityAttributeSampler RandomIdentityBatchSamplerNew2 RandomErasing ResizeRandomCrop RectScale RandomSizedRectCrop Logger AverageMeter mkdir_if_missing load_checkpoint copy_state_dict read_json save_checkpoint write_json to_numpy to_torch Attribute_Dataset _pluck Attribute_Dataset_Simulation _pluck Dataset _pluck _pluck_query_gallery Flip_Preprocessor Preprocessor Attribute_Preprocessor_s Direct_Preprocessor Attribute_Preprocessor RandomIdentityAttributeSampler_s RandomIdentitySampler RandomIdentityAttributeSampler RandomIdentityBatchSamplerNew2 RandomErasing CenterCrop RandomSizedRectCrop ResizeRandomCrop RectScale extract_features vehicle_pairwise_distance load_features to_numpy extract_features vehicle_pairwise_distance extract_features vehicle_pairwise_distance re_ranking _unique_sample vehicle_pairwise_distance to_numpy extract_features join create list gallery Compose set query DataLoader Preprocessor Normalize workers big_height batch_size SGD get_data query save_checkpoint Logger arch dataset cuda seed big_width create hasattr data_dir logs_dir map Adam load_state_dict append module range format target_width Evaluator set combine_trainval resume manual_seed target_height join gallery Cross_Trihard_Trainer evaluate gpus print load_checkpoint weights named_parameters parameters num_instances adjust_lr train epochs split items arctan print pi OrderedDict load_state_dict startswith sleep cpu float round enumerate list i nums state_dict update group copy keys compile outside makedirs match update extract_cnn_feature time format val print AverageMeter OrderedDict eval avg zip enumerate len update extract_cnn_feature time format val print AverageMeter OrderedDict eval avg zip enumerate len update extract_cnn_feature time format val print AverageMeter OrderedDict eval avg zip enumerate len list view mm expand t cat transform sum addmm_ values len list view mm expand t cat transform sum addmm_ values len list view mm expand t cat transform sum addmm_ values len print format mean_ap print format mean_ap print format mean_ap load items list Variable from_numpy OrderedDict open list view mm expand t cat transform sum addmm_ values len load close open numpy Flip_Preprocessor DataLoader Compose Normalize DataParallel Combine_Net open extract_features dump close listdir get_real_test_data gallery_dir query_dir g_file q_file Ten_crop warn topk size t eq mul_ expand_as append sum max zeros list items choice asarray arange defaultdict astype argsort shape _unique_sample int32 zip append to_numpy range zeros enumerate len asarray arange astype average_precision_score argsort shape int32 append to_numpy range to_torch remove model Variable register_forward_hook 
OrderedDict eval append remove model Variable register_forward_hook OrderedDict eval cpu append minimum exp zeros_like print transpose astype float16 mean int32 unique append zeros sum max range len expand_as t sqrt addmm_ expand data ne view Variable size min squeeze expand t eq gather max T cholesky eye print max_pool2d avg_pool2d cat Cross_Entropy_Trihard_ResNet Cross_Entropy_Trihard_ResNet Cross_Entropy_Trihard_ResNet Cross_Entropy_Trihard_ResNet Cross_Entropy_Trihard_ResNet Cross_Trihard_Senet print Cross_Trihard_Seresnet list DenseNet group load_url match load_state_dict keys compile list DenseNet group load_url match load_state_dict keys compile list DenseNet group load_url match load_state_dict keys compile load list DenseNet group match load_state_dict keys compile load_url load_state_dict DenseNet load_url load_state_dict DenseNet load_url load_state_dict DenseNet load_url load_state_dict DenseNet Direction_ResNet load_state_dict_from_url DPN load_state_dict load_state_dict_from_url DPN load_state_dict load_state_dict_from_url DPN load_state_dict load_state_dict_from_url DPN load_state_dict load_state_dict_from_url DPN load_state_dict init_weights DPN HighResolutionNet init_weights load load_state_dict InceptionV4 affine print bias kaiming_normal_ weight __name__ constant_ print bias normal_ weight __name__ constant_ print Multi_Attribute_ResNet_3 print Multi_Attribute_ResNet_3 print Multi_Attribute_ResNet_3 print Multi_Attribute_ResNet_3 load_url Res2Net load_state_dict load Res2Net load_state_dict load_url Res2Net load_state_dict load_url Res2Net load_state_dict load_url Res2Net load_state_dict load load_state_dict initialize_pretrained_model SENet print Cross_Trihard_Seresnet reshape min sum max range len reshape norm makedirs dirname mkdir_if_missing join copy dirname save mkdir_if_missing load format partial print Unpickler isfile data items list isinstance print set copy_ add keys state_dict is_tensor list map split append enumerate list map split append enumerate title imread pause view argsort intersect1d numpy argwhere in1d cpu mm append RandomIdentitySampler Trainer Trainer_SAC_Triplet step_epoch WarmupMultiStepLR eval frozen_sublayer step HighResolutionNet init_weights load join list save keys range state_dict load join list save keys range state_dict load join list save keys range state_dict load copy_weight_for_branches load_state_dict ResNet_mgn_lr load load_state_dict copy_weight_for_branches101 ResNet_mgn_lr load copy_weight_for_branches152 load_state_dict ResNet_mgn_lr ResNet_reid ResNet_reid ResNet_reid load close open cosine range cat load close open view expand t addmm_ concatenate float32 | # Implementation of Vehicle Re-Identification Based on Complementary Features for 2020 AICity Challenge Track2 This repository contains the source codes of vehicle Re-ID of our implementation for 2020 AICity Challenge, and we got 5-th place in the vehicle Re-ID track of AIC2020. [Our paper](https://arxiv.org/abs/2005.04463) ###### ## Dependencies python 2.7 / python 3.6 pytorch 1.0 + torchvision 0.2.1 + metric_learn 0.5.0 + cv2 3.0 + refer to our source codes for other dependencies | 2,174 |
ghzhang233/Leakage-Neutral-Learning-for-QuoraQP | ['selection bias'] | ['Selection Bias Explorations and Debias Methods for Natural Language Sentence Matching Datasets'] | quantify/propensity.py quantify/leaky_predict.py debias/main.py debias/make_data.py debias/utils.py make_model_f make_model_adn make_model_g set_trainability make_model_k text_cleaning get_logger ResultRecorder DataGenerator encode_data read_data extract_unlexicalized train_and_evaluate extract_network_based extract_leakage get_model extract_deepwalk run calculate_weight_fraction leaky_extracting layers lstm_layer concatenate Embedding embedding_layer lstm_cell Model summary info Input range compile info Model summary range compile info Model summary range compile info set_trainability compile model_f Model summary model_k Input model_g join lower sub split stdout getLogger addHandler StreamHandler setLevel FileHandler transform LabelEncoder append fit_transform fit apply coo_matrix sum max values print values len concatenate print system zip append zeros sum max values preferential_attachment add_edge list concatenate print Graph jaccard_coefficient adamic_adar_index add_nodes_from resource_allocation_index zip max range values add_node load encode_data permutation arange extract_unlexicalized extract_deepwalk dump print len extract_network_based apply extract_leakage read_csv fillna open Model Input range compile to_categorical ReduceLROnPlateau accuracy_score argmax max values ones len predict concatenate mean load_weights scale RandomForestClassifier load print EarlyStopping ModelCheckpoint get_model array fit train_and_evaluate read_data to_csv apply coo_matrix sum max values | # Selection Bias Explorations and Debias Methods for Natural Language Sentence Matching Datasets This is the code in [Selection Bias Explorations and Debias Methods for Natural Language Sentence Matching Datasets](<https://arxiv.org/abs/1905.06221>) which has been accepted by ACL 2019. ## Folders *<u>quantify</u>* contains codes for generating weights and codes for *Section 2.1 Quantifying the Biasedness in Datasets* in which we explore the severity of the leakage in six NLSM datasest. *<u>debias</u>* contains codes for *Section 5 Experimental Results for the Leakage-neutral Method on QuoraQP* where we apply our leakage-neutral learning in QuoraQP with a classical Siamese-LSTM model. **Usage and requirements are stated inside folders.** ## Datasets We use following six datasets in our paper: - [QuoraQP](<https://drive.google.com/file/d/0B0PlTAo--BnaQWlsZl9FZ3l1c28/view>) - [MSRP](<https://www.microsoft.com/en-us/download/details.aspx?id=52398>) | 2,175 |
giallo41/taxi-demand | ['traffic prediction'] | ['Revisiting Spatial-Temporal Similarity: A Deep Learning Framework for Traffic Prediction'] | source/metric.py source/NYC_base_minmax_lr_0_01.py source/utils.py mape_trs plot_img mae mae_trs inverse_logscale logscale chk_1ch_output chk_event_metric_3by3 mape rmse rmse_trs maa_trs SGDLearningRateTracker make_temporal_model invlog_rmse_t2 mape_trs invlog_mae_t2 event_metric load_np_data invlog_mape_t1 inv_minmax_rmse_tr10 rmse_trs history_save invlog_mape make_test_ouput invlog_rmse invlog_rmse_t1 make_test_ouput_norm mape maxscale model_metric make_one_hot_data invlog_rmse_tr10 mae_t1 logscale inverse_maxscale make_maxscale_data inv_minmax_mape_tr10 make_test_2ch_ouput mae_t2 inverse_logscale make_binary_one_hot rmse hotspot_metric invlog_mape_tr10 maa_trs exp astype average sqrt square mean abs mean abs average sqrt square average divide mean abs list mape_trs arange print mae astype mae_trs rmse_trs mape maa_trs rmse append expand_dims range read_csv values drop list asarray print mae astype mape rmse read_csv values drop minimum subplot show concatenate reshape astype imshow set_visible figure max log shape Model Input concatenate load zeros print shape astype inverse_logscale int ones append zeros max range ones zeros range append exp cast greater exp cast greater cast greater cast greater exp exp subtract subtract exp exp exp exp list asarray print mape rmse list asarray print mape rmse list mape_trs arange print event_metric hotspot_metric mape maa_trs rmse append range values drop print DataFrame to_csv str print reshape astype inverse_logscale to_csv shape append DataFrame range predict str values print reshape astype inverse_logscale to_csv shape mape rmse append DataFrame range predict str mape_trs print reshape astype inverse_logscale to_csv rmse_trs shape append DataFrame range predict | ## Seoul City taxi request forecast ------------ <b>[ Objective ]</b> - We want predict the future taxi-demand within 30-min of all locations. <br> <b>[ Summary ]</b> - TGNet(temporal guided network) applied Seoul City & NYC taxi datasets. - It outperforms previous models. <br> <b>[ Intro ]</b> | 2,176 |
giannisnik/mpad | ['text classification'] | ['Message Passing Attention Networks for Document Understanding'] | hierarchical_mpad/layers.py mpad/main.py mpad/utils.py hierarchical_mpad/main.py hierarchical_mpad/utils.py mpad/models.py mpad/layers.py hierarchical_mpad/mlp.py mpad/mlp.py hierarchical_mpad/models.py MessagePassing Attention train test MLP MPAD get_vocab preprocessing create_gows sparse_mx_to_torch_sparse_tensor AverageMeter accuracy load_embeddings generate_batches clean_str normalize load_file MessagePassing Attention train test MLP MPAD get_vocab preprocessing create_gows sparse_mx_to_torch_sparse_tensor AverageMeter accuracy load_embeddings generate_batches clean_str normalize load_file backward model zero_grad step cross_entropy cross_entropy model print add set uniform load_word2vec_format zeros sub append list clean_str split dict print len list normalize csr_matrix transpose dict append zeros range enumerate len diags flatten dot sum array sum type_as double data Size astype float32 from_numpy shape long list permutation tocsr lil_matrix append LongTensor ones diag sparse_mx_to_torch_sparse_tensor min ceil zeros array range enumerate len | ## Message Passing Attention Networks for Document Understanding Code for the paper [Message Passing Attention Networks for Document Understanding](https://ojs.aaai.org/index.php/AAAI/article/view/6376/6232). ### Requirements Code is written in Python 3.6 and requires: * PyTorch 1.1 * gensim 3.8 * scikit-learn 0.21 ### Word embeddings Download and unzip the pre-trained word2vec vectors from the following link: https://code.google.com/p/word2vec/ ### Run the model | 2,177 |
giaylenia/OCTA_segm_study | ['semantic segmentation'] | ['Automated Segmentation of Optical Coherence Tomography Angiography Images: Benchmark Data and Clinically Relevant Metrics'] | CNN/CNN_model/mygenerator.py DataGenerator | # Automated OCTA segmentation This repository contains the code that has been used in the study described [here](http://arxiv.org/abs/1912.09978). We proposed for the first time an extensive comparison of blood vessel segmentation methods in Optical Coherence Tomography Angiography (OCTA). Each folder represents a vessel enhancement method (Frangi, Gabor, SCIRD-TS, OOF, CNN, and UNet), plus measurements to compute the quality of the segmentation (CAL, TopS). The CS-Net implementation can be found [here](https://github.com/suyanzhou626/CSNet). Folders contain README.md when necessary or comments to guide you through the code. ## Data [Link](https://doi.org/10.7488/ds/2729) ## Programming languages The code contains scripts in MATLAB and Python (Jupyter notebook). | 2,178 |
giellalt/lang-mdf | ['unity'] | ['Open-Source Morphology for Endangered Mordvinic Languages'] | devtools/ocred-txt2xml.py main analyse_line indent len communicate endswith fromstring exists analyse_line Popen str SubElement getcwd getroot walk asarray replace set mkdir ElementTree indent join print write rmtree split communicate print Popen | The Moksha morphology and tools ========================================== [](https://github.com/giellalt/lang-mdf/issues) [](https://github.com/giellalt/lang-mdf/actions) [](https://github.com/giellalt/lang-mdf/blob/main/LICENSE) [](https://pahkat.uit.no/main/download/speller-mdf?platform=desktop&channel=nightly) [](https://pahkat.uit.no/main/download/speller-mdf?platform=mbile&channel=nightly) This repository contains finite state source files for the Moksha language, for building morphological analysers, proofing tools and dictionaries. The data and implementation are licenced under GNU GPL | 2,179 |
giellalt/lang-myv | ['unity'] | ['Open-Source Morphology for Endangered Mordvinic Languages'] | devtools/analyze_myv.py main decode communicate endswith strip Popen open str getcwd exit getroot walk format parse close mkdir join print text write split findall len | The Erzya morphology and tools ========================================== [](https://github.com/giellalt/lang-myv/issues) [](https://github.com/giellalt/lang-myv/actions) [](https://github.com/giellalt/lang-myv/blob/main/LICENSE) [](https://pahkat.uit.no/main/download/speller-myv?platform=desktop&channel=nightly) [](https://pahkat.uit.no/main/download/speller-myv?platform=mbile&channel=nightly) This repository contains finite state source files for the Erzya language, for building morphological analysers, proofing tools and dictionaries. The data and implementation are licenced under GNU GPL | 2,180 |
ginta-re/Varseta | ['language acquisition'] | ['Language-independent exploration of repetition and variation in longitudinal child-directed speech: A tool and resources'] | evaluation.py utterances.py main.py variations.py Utterances | # Varseta A python script for extracting Variation sets from child directed speech (CDS) data. # Data format Files should be prepared in a simple .txt format. 4 tab-separated columns. Child directed speech data example: ``` 193.99 194.71 0.720 Goddag goddag 196.88 197.8 0.920 Goddag goddag 199.97 200.786 0.816 Säger Siffun 201.327 203.17 1.843 Goddag goddag säger Siffun goddag goddag 204.36 205.24 0.880 #LL ! | 2,181 |
gioramponi/GAN_Time_Series | ['time series', 'data augmentation'] | ['T-CGAN: Conditional Generative Adversarial Network for Data Augmentation in Noisy Time Series with Irregular Sampling'] | orchestrator.py experiments/routine_ecg.py gan_class.py sample_data_class.py classifier_class.py experiments/routine_star.py main.py experiments/routine_earth.py classifier GAN sample_curves main routine sample_data sample_curves main routine sample_curves main routine sample_curves main routine sorted list arange sample range append generative create_placeholders Saver save discriminative run shape append range normal get_optimizers GAN close float time print get_losses global_variables_initializer len seed int str read asarray routine write dumps loads reset_default_graph InteractiveSession str format concatenate print float create_clas classifier train | # GAN_Time_Series The model is a Conditional Generative Adversarial Network for time series with irregular time intervals. The model generates new time series given a training set of existing ones. ## Why generate data? The main idea is to use this model to augment an unbalanced time-series dataset in order to increase the precision of a classifier. ## HOW TO USE THE MODEL - Requirements: - python 3 - tensorflow, numpy - Download the repository | 2,182 |
githubharald/simplehtr | ['optical character recognition', 'scene text recognition'] | ['An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition'] | src/create_lmdb.py src/main.py src/dataloader_iam.py src/model.py src/preprocessor.py DataLoaderIAM get_img_height validate write_summary get_img_size infer main FilePaths train Model DecoderType main Preprocessor process_batch validate train_set print train_batch write_summary get_img_size get_next Preprocessor get_iterator_info save has_next append float process_batch print get_img_size get_next eval Preprocessor get_iterator_info has_next validation_set infer_batch range len Batch print infer_batch get_img_size process_img Preprocessor IMREAD_GRAYSCALE imread validate batch_size img_file train_words line_mode ArgumentParser list data_dir Model parse_args DataLoaderIAM validation_words join read add_argument infer char_list write train show subplot transpose process_img imshow IMREAD_GRAYSCALE imread | # Handwritten Text Recognition with TensorFlow * **Update 2021/2: recognize text on line level (multiple words)** * **Update 2021/1: more robust model, faster dataloader, word beam search decoder also available for Windows** * **Update 2020: code is compatible with TF2** Handwritten Text Recognition (HTR) system implemented with TensorFlow (TF) and trained on the IAM off-line HTR dataset. The model takes **images of single words or text lines (multiple words) as input** and **outputs the recognized text**. 3/4 of the words from the validation-set are correctly recognized, and the character error rate is around 10%.  ## Run demo * Download one of the pretrained models | 2,183 |
gitting-guud/GML_Project | ['combinatorial optimization'] | ['Regret in Online Combinatorial Optimization'] | run_experiment.py Agents/FPL_agent.py Agents/Random_agent.py Agents/Exp2_agent.py Agents/CombUCB_agent.py Environment/Build.py Environment/Env.py Agents/Fixed_path_agent.py run_experiment run_experiment_compar get_orig_dest create_graph CombUCB_agent_part1 CombUCB_agent_part2 Exp2_agent_part2 Exp2_agent_part1 Fixed_Path_agent Random_agent Generate_Graph Graph Vertex Agent_to_graph_assignment cost_calculator run_experiment run_experiment_compar get_orig_dest create_graph CombUCB_agent_part1 CombUCB_agent_part2 Exp2_agent_part2 Exp2_agent_part1 Fixed_Path_agent Random_agent Generate_Graph Graph Vertex Agent_to_graph_assignment cost_calculator list choice append keys range print Generate_Graph build format get_optimal_paths print Agent_to_graph_assignment append range random_assignement enumerate get_optimal_paths Generate_Graph print build Agent_to_graph_assignment append range random_assignement enumerate | # GML_Project Semi-bandits for Decentralized Optimal Transport on a Graph By KILANI AL HOUCEINE and NAOUMI SALMANE, under the supervision of Mr SEZNEC Julien In this project, we consider a mass (e.g. people) transported over a resistive graph. From a global perspective, a central planner might be interested in choosing each route to minimize the global cost (quadratic in each edge flow). The constraint that each unit of mass starts from its starting node and successfully reaches its endpoint can be formulated as a linear constraint. Hence, this is a standard convex optimization problem. Nonetheless, in a decentralized setting, each player will choose its route to minimize its local cost (linear in the flow of each visited edge). To do so, players are not allowed to communicate and are hence doomed to try routes sequentially. Moreover, at each try, all the players choose a route and only observe the flow in the visited edges (semi-bandit feedback). The cost of a route is therefore dependent on the other players' actions. Hence, rewards are not stochastic, but probably not fully adversarial either. For the [agents](https://github.com/gitting-guud/GML_Project/tree/master/Agents): - Stochastic agent: [UCB1](http://karthikabinavs.xyz/surveySemiBandit.pdf) - Adversarial agent: [Exp2](https://arxiv.org/pdf/1204.4710.pdf) - Hybrid agent: [FPL-Trix](https://github.com/alohia/combandits/blob/master/report/report.pdf) For the [environments](https://github.com/gitting-guud/GML_Project/tree/master/Environment), we implemented the following graphs: - Random Sparse Graph: a fully connected graph where we randomly discard a fraction of the edges. | 2,184 |
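To make the cost structure described in the readme above concrete, here is one possible formalisation; the notation (f_e, r_e, P_i) is ours, not the project's. With f_e the total flow induced on edge e by all chosen routes and r_e the edge's resistance, the central planner's objective is quadratic in the edge flows, while each player's cost is linear in the flows of the edges on its own route.

```latex
% Hedged formalisation of the two costs; symbols are our own notation.
\begin{align}
  C_{\text{global}}(f) &= \sum_{e \in E} r_e \, f_e^{2}
    && \text{central planner: quadratic in each edge flow} \\
  c_i &= \sum_{e \in P_i} r_e \, f_e
    && \text{player $i$ on route $P_i$: linear in visited edge flows}
\end{align}
```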
gjacopo/poppysite | ['image retrieval'] | ['PlaNet - Photo Geolocation with Convolutional Neural Networks'] | poppysite/whscrape.py whscrape/items.py whscrape/pipelines.py whscrape/middlewares.py poppysite/__init__.py whscrape/spiders/__init__.py whscrape/settings.py whscrape/spiders/whsite.py _crawl run_crawler WhscrapeItem WhscrapeSpiderMiddleware WhscrapePipeline WHSSpider system Pool map UNESCO_URL | poppysite ======= Automated measurement of World Heritage sites attractiveness from social-sensing and processing of geotagged images in community-contributed photos collections. --- **About** **Description** **<a name="References"></a>References** * M. Alivand and H.H. Hochmair (2017): [Spatiotemporal analysis of photo contribution patterns to Panoramio and Flickr](https://www.tandfonline.com/doi/abs/10.1080/15230406.2016.1211489?journalCode=tcag20), Cartography and Geographic Information Science, 44(2):170-184, doi:[10.1080/15230406.2016.1211489](https://doi.org/10.1080/15230406.2016.1211489). * E. Spyrou and P. Mylonas (2016): [Analyzing Flickr metadata to extract location-based information and semantically organize its photo content](http://www.image.ece.ntua.gr/papers/841.pdf), _Neurocomputing_, 172:114-133, doi: [10.1016/j.neucom.2014.12.104](https://doi.org/10.1016/j.neucom.2014.12.104). * A. Sitthi, M. Nagai, M. Dailey, and S. Ninsawat (2016): [Exploring land use and land cover of geotagged social-sensing images using naive Bayes classifier](http://www.mdpi.com/2071-1050/8/9/921/pdf), _Sustainability_, 8:921, doi: [10.3390/su8090921](10.3390/su8090921). | 2,185 |
gkaposto/end2end_lishen | ['breast cancer detection'] | ['Deep Learning to Improve Breast Cancer Early Detection on Screening Mammography'] | dm_resnet.py ddsm_train/heatmap_score.py dm_inference.py dm_region.py ddsm_train/patch_clf_train.py dm_image.py dm_keras_ext.py dm_preprocess.py ddsm_train/sample_patches_combined.py ddsm_train/sample_patches_main.py meta.py dm_enet.py dm_multi_gpu.py ddsm_train/image_clf_train.py MultiViewDLElasticNet DLRepr DMImageDataGenerator get_roi_patches DMCandidROIIterator clust_kpts add_img_margins sweep_img_patches read_resize_img read_img_for_pred index_balancer get_prob_heatmap DMExamListIterator DMImgListIterator DMNumpyArrayIterator DMDirectoryIterator crop_img to_sparse pred_2view_img_list make_pred_case Yaroslav get_dl_model DMAucModelCheckpoint create_optimizer DMFlush flip_all_img robust_load_model do_3stage_training do_2stage_training DMMetrics load_dat_ram make_parallel DMImagePreprocessor region_features topK_region_idx total_area prob_heatmap_features global_max_intensity _shortcut basic_block_org _residual_block bottleneck_org add_top_layers _vgg_block ResNetBuilder basic_block _conv_bn_relu bottleneck MultiViewResNetBuilder main _bn_relu_conv DMMetaManager run run run create_blob_detector const_filename sample_blob_negatives crop_val overlap_patch_roi sample_hard_negatives sample_patches run create_blob_detector const_filename sample_blob_negatives crop_val overlap_patch_roi sample_hard_negatives sample_patches run zeros append concatenate sort choice zeros sum len int float astype IMREAD_GRAYSCALE resize IMREAD_UNCHANGED imread pixel_array equalizeHist astype read_resize_img shape zeros int ndarray isinstance pt zeros clip enumerate KMeans array fit int equalizeHist astype copy append float round range isinstance reshape add_img_margins sweep_img_patches read_resize_img segment_breast preprocess append zeros predict predict_on_batch mean stack append max columns from_records concat stack append prob_heatmap_features flip_axis load_model append next Model Input Yaroslav load_model print NNet output Model flush print DMFlush EarlyStopping compile fit_generator ReduceLROnPlateau append ModelCheckpoint flush flush layers isinstance print DMFlush EarlyStopping fit_generator ReduceLROnPlateau append ModelCheckpoint compile append outputs range len pop str list update region_features zeros_like topK_region_idx min enumerate append label keys range regionprops len add heatmap_layer model0 output add_vgg_blocks add Model Dense softmax add_fc_layers add_residual_blocks get_weights Input set_weights compile build_resnet_50 get_dl_model DMImageDataGenerator const_filename read_img_for_pred make_parallel get_prob_heatmap open add_top_layers tolist getenv shape append dump RandomState unique flush enumerate int set_index sort_index print reshape isfile ravel read_csv len DMAucModelCheckpoint do_2stage_training where flow save load_dat_ram load_model flow_from_directory load_weights calc_test_auc isinstance nb_sample do_3stage_training evaluate_generator insert str join append crop_val sum astype copy int SimpleBlobDetector_Params split RETR_TREE join int flush RandomState CHAIN_APPROX_SIMPLE print toimage findContours add_img_margins astype copy boundingRect save randint argmax moments split boundingRect save argmax RETR_TREE crop_val RandomState findContours astype copy moments flush join int CHAIN_APPROX_SIMPLE print toimage add_img_margins randint split permutation boundingRect save argmax RETR_TREE RandomState findContours astype copy detect moments 
flush join int CHAIN_APPROX_SIMPLE print toimage add_img_margins split join write_pat_list create_blob_detector startswith do_sampling get_loc train_test_split overlap_patch_roi | [](https://creativecommons.org/licenses/by-nc-nd/4.0/) # Deep Learning to Improve Breast Cancer Detection on Screening Mammography (End-to-end Training for Whole Image Breast Cancer Screening using An All Convolutional Design) Li Shen, Ph.D. CS Icahn School of Medicine at Mount Sinai New York, New York, USA  ## Introduction This is the companion site for our paper that was originally titled "End-to-end Training for Whole Image Breast Cancer Diagnosis using An All Convolutional Design" and was retitled as "Deep Learning to Improve Breast Cancer Detection on Screening Mammography". The paper has been published [here](https://rdcu.be/bPOYf). You may also find the arXiv version [here](https://arxiv.org/abs/1708.09427). This work was initially presented at the NIPS17 workshop on machine learning for health. Access the 4-page short paper [here](https://arxiv.org/abs/1711.05775). Download the [poster](https://raw.githubusercontent.com/lishen/end2end-all-conv/master/ddsm_train/NIPS17%20ML4H%20Poster.pdf). For our entry in the DREAM2016 Digital Mammography challenge, see this [write-up](https://www.synapse.org/LiShenDMChallenge). This work is much improved from our method used in the challenge. ## Whole image model downloads | 2,186 |
gkdziugaite/pacbayes-opt | ['generalization bounds'] | ['Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data'] | snn/core/parse_args.py snn/experiments/run_pacb.py snn/core/data_fn.py snn/core/mlp_fn.py snn/core/extra_fn.py snn/core/sgd.py snn/core/cnn.py setup.py snn/core/__init__.py snn/core/utils.py snn/experiments/run_sgd.py snn/core/cnn_fn.py snn/core/network.py snn/core/load_cifar_data.py snn/core/fc.py CNN CNN_withnoise l2_norm convolutional_net lazy_property variable_initializer weight_diff convolutional_net_init sample_label_from_dict load_cifar_data idx_to_label normalize_meanstd sample_label binarize_mnist_labels load_binary_mnist shuffledata label_indices sort_corresponding_to_labels label next_batch concat_labels load_mnist_data KLdiv_prime create_label_dictionary sort_corresponding_to_labels margin_loss idx_to_label generate_zero_noise sample_label KLdiv hoeffdingbnd label_indices concat_labels generate_noise SamplesConvBound label count_MLP_params sample_label_from_dict approximate_BPAC_bound Newt shuffledata next_batch FC load_data multilayer_perceptron multilayer_perceptron_init MLP_withnoise l2_norm lazy_property variable_initializer weight_diff Network BasicParser Interpreter CompleteParser label_indices SGD serialize deserialize run_pacb run_sgd __name__ lrn max_pool append lrn max_pool hstack print reshape mean sqrt reshape astype read_data_sets binarize_mnist_labels read_data_sets normalize_meanstd to_categorical load_data sample list arange dict array arange len label_indices ndarray isinstance label append normal append zeros append argmax max range len zip KLdiv KLdiv_prime Newt range sqrt log print log Newt hoeffdingbnd range append list range zip join str reshape transpose empty range load_batch zip zip join makedirs join PACB_init format optimize_PACB save_output evaluate_SNN_accuracy print optimize | # Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data This is an implementation of the PAC-Bayes generalization bound optimization for stochastic neural networks, as described in the article "[Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data](https://arxiv.org/pdf/1703.11008.pdf)" by Dziugaite and Roy, published in *Uncertainty in AI* (2017). ## Requirements Python 3.5: absl-py==0.11.0, astor==0.8.1, certifi==2020.6.20, gast==0.4.0, grpcio==1.33.2, h5py==2.10.0, importlib-metadata==2.0.0, Keras==2.2.2, Keras-Applications==1.0.4, Keras-Preprocessing==1.0.2, Markdown==3.2.2, numpy==1.14.5, protobuf==3.14.0, PyYAML==5.3.1, scipy==1.4.1, six==1.15.0, tensorboard==1.10.0, tensorflow==1.10.0, termcolor==1.1.0, Werkzeug==1.0.1, wincertstore==0.2, zipp==1.2.0 ## Instructions Running the code involves 2 steps: 1. SGD optimization, which saves initial and final network weights to `snn/experiments` 2. PAC-Bayes optimization, which first loads the weights saved in the previous step, and then optimizes the PAC-Bayes bound over the weights and variances. ### SGD Optimization To run SGD on a fully connected neural network consisting of a hidden layer with 600 neurons for 20 epochs on binary MNIST, execute the following command: | 2,187 |
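The pacbayes-opt row above lists helper functions such as `KLdiv`, `KLdiv_prime` and `Newt`, which suggests the PAC-Bayes-kl bound is inverted numerically. The following is a minimal, self-contained sketch of that inversion (Newton's method on the binary KL), written from the standard construction rather than from the repo; the function names, starting point and iteration count are illustrative assumptions, not the repo's API.

```python
import numpy as np

def kl_bernoulli(q, p):
    """Binary KL divergence kl(q || p), assuming 0 < q < 1 and 0 < p < 1."""
    return q * np.log(q / p) + (1.0 - q) * np.log((1.0 - q) / (1.0 - p))

def invert_kl(q_hat, c, iters=50):
    """Largest p with kl(q_hat || p) <= c, i.e. the PAC-Bayes-kl risk bound.

    q_hat : empirical (stochastic) error of the classifier
    c     : right-hand side, e.g. (KL(Q||P) + log(2*sqrt(m)/delta)) / m
    """
    p = min(1.0 - 1e-9, q_hat + np.sqrt(c / 2.0))  # Pinsker-style starting point
    for _ in range(iters):
        f = kl_bernoulli(q_hat, p) - c
        fp = (1.0 - q_hat) / (1.0 - p) - q_hat / p   # d/dp kl(q_hat || p)
        p = np.clip(p - f / fp, q_hat + 1e-12, 1.0 - 1e-9)
    return p

# Example: 5% empirical error and a complexity term of 0.02.
print(invert_kl(0.05, 0.02))
```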
gletarte/dichotomize-and-generalize | ['generalization bounds'] | ['Dichotomize and Generalize: PAC-Bayesian Binary Activated Deep Neural Networks'] | launch.py experiment.py pbgdeep/dataset_loader.py pbgdeep/utils.py pbgdeep/networks.py launch main DatasetLoader PBCombiOutputLayer PBCombiInputLayer PBCombiNet PBGLayer PBGOutputFunction BaselineNet PBCombiBaseLayer PBGNet PBGInputFunction PBCombiHiddenLayer PBGHiddenFunction bound MetricLogger accuracy MasterMetricLogger linear_loss get_logging_dir_name arange MasterMetricLogger set_priors SGD DataLoader ReduceLROnPlateau vstack device PBGNet DatasetLoader ModelCheckpoint idxmin MetricLogger set_device Adam Model TensorDataset load_state_dict append log_filename train_test_split state_dict format Experiment test Tanh init_weights pbgnet_testing startswith manual_seed item get_logging_dir_name load join check_random_state bound print EarlyStopping to_csv dict parameters BaselineNet evaluate_generator Tensor train read_csv PBCombiNet makedirs seed join sorted print ParameterGrid dict get_logging_dir_name invoke exists enumerate len sum clone sqrt exp log | # Dichotomize and Generalize: PAC-Bayesian Binary Activated Deep Neural Networks This repository contains an implementation of PBGNet (**P**AC-Bayesian **B**inary **G**radient **Net**work) and all related experiments presented in "[Dichotomize and Generalize: PAC-Bayesian Binary Activated Deep Neural Networks](https://papers.nips.cc/paper/8911-dichotomize-and-generalize-pac-bayesian-binary-activated-deep-neural-networks)" by Letarte, Germain, Guedj and Laviolette, accepted at *NeurIPS 2019*. ## Requirements - Python 3.6 - Numpy 1.14.3 - Pytorch 1.2.0 - Poutyne 1.2 - Scikit-learn 0.20.3 - Pandas 0.23.0 - Click 6.7 | 2,188 |
gletarte/pbrff | ['generalization bounds'] | ['Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior'] | experiment.py dispatch.py pbrff/landmarks_selector.py pbrff/baseline.py pbrff/greedy_kernel.py pbrff/data_loader.py pbrff/landmarks_based.py main launch_slurm_experiment main learn_svm DataLoader compute_greedy_kernel GreedyKernelLearner compute_landmarks_based LandmarksBasedLearner compute_landmarks_selection LandmarksSelector call join print join launch_slurm_experiment makedirs cpu_count DataLoader ArgumentParser compute_loss dataset list n_cpu imap parse_args train_test_split GreedyKernelLearner update partial shuffle learn_svm sample_omega load items check_random_state ParameterGrid add_argument f1_score time GridSearchCV concatenate print fit SVC PredefinedSplit zeros accuracy_score predict len compute_pb_Q print compute_ok_Q learn_okrff learn_rff append learn_pbrff print LandmarksBasedLearner select_landmarks print compute_Q learn_rbf compute_loss append learn_pb | # Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior This python code has been used to conduct the experiments presented in Section 6 of the following paper: > Gaël Letarte, Emilie Morvant, Pascal Germain. > Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior http://proceedings.mlr.press/v89/letarte19a.html ## Content * ``experiment.py`` contains the code used to launch experiments and save the results in the ``results`` folder. * ``pbrff.ipynb`` is a _jupyter notebook_ to process the ``results`` and produce relevant figures. * ``pbrff/landmarks_based.py`` and ``pbrff/landmarks_selector.py`` implement algorithms used for **Landmarks-Based Learning** experiments (section 6.1). | 2,189 |
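The pbrff readme above concerns learning with kernel Fourier features. As background, here is a minimal, generic Rahimi-Recht-style random-Fourier-feature sketch for the RBF kernel; it follows the standard construction and is not taken from this repo, so names such as `rbf_random_features`, `gamma` and `n_features` are illustrative.

```python
import numpy as np

def rbf_random_features(X, n_features, gamma, seed=0):
    """Random Fourier features z(x) with z(x)·z(y) ≈ exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # For the RBF kernel exp(-gamma ||x-y||^2), the spectral measure is N(0, 2*gamma*I).
    omega = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ omega + b)

# Sanity check: compare the feature-map inner products with the exact kernel.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = rbf_random_features(X, n_features=5000, gamma=0.5)
approx = Z @ Z.T
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())
```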
gmftbyGMFTBY/PONE | ['dialogue evaluation'] | ['PONE: A Novel Automatic Evaluation Metric for Open-Domain Generative Dialogue Systems'] | PONE/unreference_score.py PONE/hybird.py PONE/train_unreference.py PONE/reference_score.py PONE/bm25_utils.py PONE/utils.py similarity_bert/main.py PONE/es.py BM25Model ESUtils ESChat show model_get_score collection_result BERT_RUBER aggregate_scores load_word_embedding cal_vector_extrema cal_BLEU cal_ROUGE show_human read_human_score read_human_score_csv cal_embedding_average cal_greedy_matching BERT_RUBER_refer validation main train test BERT_RUBER_unrefer combine_da_data process_dataset get_batch get_batch_da normalization_np load_best_model cos_similarity get_weighted_matrix process_train_file process_da_src_file read_file cal_avf_performance generate_fluent_negative_1 normalize process_train_file_w2v generate_fluent_negative_2 get_da_label generate_bert_embeddings MineDataset packup init_bert_model collate_fn read_file generate_attention_mask rand array tqdm vecterize max sqrt zeros sum array len vecterize square add sqrt sum array vecterize all sqrt append zeros float sum array len method4 get_scores join Rouge print subshow append pearsonr spearmanr append read_file append read_csv_file model_get_score dataset BERT_RUBER cal_embedding_average list scores BERT_RUBER print cal_vector_extrema tqdm cal_BLEU cal_ROUGE zip append dataset cal_greedy_matching criterion backward clip_grad_norm_ zero_grad step from_numpy parameters is_available float BCELoss cuda net enumerate criterion from_numpy eval is_available float BCELoss cuda net enumerate validation print set_description floor save dataset cuda da_src_train get_batch_da load_best_model Adam weight_matrix load_state_dict append range state_dict BERT_RUBER_unrefer inf get_batch close test da_tgt_train is_available get_da_label validation time print tqdm parameters bert_size read_file pretrained_model model_name train min max min max int print load_state_dict listdir split seed print normalization_np BM25Model read_file numpy len seed ones_like arange concatenate print astype shuffle where choice BM25Model stack read_file softmax append float get_weighted range len sum concatenate print is_available numpy cuda net append len shuffle split len range split list concatenate print accumulate stack BertClient encode sum range append len cal norm T fill_diagonal print reshape matmul read_file print load_word2vec_format stack print concatenate combine_da_data print get_weighted_matrix process_train_file process_da_src_file tolist zeros_like from_pretrained cuda print is_available is_available pad_sequence cuda DataLoader concatenate print tqdm append generate_attention_mask numpy exists | # EPN-RUBER ## Introduction Implementation of PONE: A Feasible automatic evaluation of Open-Domain Dialog Systems with Enhancing positive and negative samples. Paper for ACL-2020 1. Based on the BERT-RUBER: Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings 2. Enhancing positive samples * OpenNMT-py * EDA Data Augmentation (lack of semantic diversity) | 2,190 |
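The PONE row above lists the classic word-embedding dialogue metrics (`cal_embedding_average`, `cal_greedy_matching`, `cal_vector_extrema`) alongside the learned scores. Below is a minimal sketch of the simplest of these, the embedding-average cosine similarity between a reference and a generated reply; the tiny hand-made embedding table is purely illustrative and not the repo's implementation.

```python
import numpy as np

def embedding_average(tokens, emb, dim):
    """Mean word vector of a sentence, skipping out-of-vocabulary tokens."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy 2-d embedding table; the real metric would use word2vec / GloVe vectors.
emb = {
    "good":  np.array([1.0, 0.1]),
    "great": np.array([0.9, 0.2]),
    "bad":   np.array([-1.0, 0.3]),
}
ref = embedding_average("that was great".split(), emb, dim=2)
hyp = embedding_average("good".split(), emb, dim=2)
print(cosine(ref, hyp))   # close to 1 for semantically similar replies
```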
gmkim90/AAS_enhancement | ['speech recognition', 'speech enhancement'] | ['Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for Speech Recognition'] | Speech_enhancement_by_AAS/main.py Speech_enhancement_by_AAS/utils.py Speech_enhancement_by_AAS/data/make_manifest_librispeech.py Speech_enhancement_by_AAS/model.py Speech_enhancement_by_AAS/data/convert_numpy_to_pytorch.py Speech_enhancement_by_AAS/trainer_DCE.py AM_training/decoder.py AM_training/test.py Speech_enhancement_by_AAS/trainer_FSEGAN.py AM_training/model.py Speech_enhancement_by_AAS/loader_functions.py Speech_enhancement_by_AAS/trainer_acoustic.py Speech_enhancement_by_AAS/data_loader.py AM_training/tune_decoder.py Speech_enhancement_by_AAS/trainer_AAS.py AM_training/utils.py AM_training/train.py Speech_enhancement_by_AAS/config.py | GreedyDecoder Decoder BeamCTCDecoder InferenceBatchSoftmax SequenceWise DeepSpeech_ken ResidualCNN4block ResidualDeepSpeech Lookahead BatchRNN str2bool str2bool result_callback getWER decode_dataset _get_variable_volatile _get_variable_nograd _get_variable AverageMeter check_config_used weights_init get_weight_statistic to_np add_argument_group get_config str2bool DataLoader FeatLoader FeatSampler _collate_fn _collate_fn_paired FeatDataset FeatLoader_paired main InferenceBatchSoftmax BRNNmultiCH BRNN SpeechClassifierRNN DeepSpeech SequenceWise stackedBRNN BatchRNN L1Loss_mask weights_init Trainer weights_init Trainer weights_init Trainer weights_init Trainer _get_variable_volatile _get_variable_nograd _get_variable AverageMeter check_config_used weights_init get_weight_statistic to_np random_combination decode dataset str FeatLoader len from_numpy wer append range format convert_to_strings GreedyDecoder float enumerate print error BeamCTCDecoder cer split append list read print close dict append vars keys open str list items hasattr batch_norm print bias _modules module range rnn len hasattr fill_ bias normal_ __name__ Variable cuda Variable cuda Variable cuda append parse_known_args print sorted fill_ FloatTensor size extend copy_ zero_ IntTensor zeros float range len sorted fill_ FloatTensor size extend copy_ zero_ IntTensor zeros float range len visualize str expnum set_device simul_real Trainer test DataLoader manual_seed random_seed train gpu makedirs sorted list tuple sample range len | # AAS_enhancement This repository contains the code and supplementary results for the paper "Unpaired Speech Enhancement by Acoustic and Adversarial Supervision" (IEEE Signal Processing Letters, 2019). The paper is available at https://arxiv.org/abs/1811.02182 ## Common setting 1. Install Warp-CTC and ctcdecode (see https://github.com/SeanNaren/deepspeech.pytorch/#installation). 2. Install kenLM (see https://github.com/kpu/kenlm) and download the 4-gram LM trained on LibriSpeech from [here](http://www.openslr.org/resources/11/4-gram.arpa.gz). ## Part 1. Pre-train acoustic model on clean speech Code for this part originates from https://github.com/SeanNaren/deepspeech.pytorch/. We change the input feature from spectrogram to log-Mel filterbank output (LMFB), and replace the 2D convolutional layer with a 1D convolutional layer. | 2,191
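The AAS_enhancement readme above pre-trains a DeepSpeech-style acoustic model with CTC via Warp-CTC. As a rough, repo-independent illustration of that objective, here is a tiny example using PyTorch's built-in `torch.nn.CTCLoss` instead of Warp-CTC; the tensor shapes, class count and random inputs are illustrative only.

```python
import torch
import torch.nn as nn

# Log-probabilities over 20 character classes (class 0 = CTC blank).
# Shape convention for CTCLoss: (T=50 time steps, N=2 utterances, C=20 classes).
log_probs = torch.randn(50, 2, 20, requires_grad=True).log_softmax(dim=-1)

# Target transcripts as class indices, padded to the longest target.
targets = torch.randint(low=1, high=20, size=(2, 12))
input_lengths = torch.tensor([50, 42])    # valid frames per utterance
target_lengths = torch.tensor([12, 9])    # valid labels per utterance

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(loss.item())
```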
gmlee329/InstanceShadowDetection | ['shadow detection'] | ['Instance Shadow Detection'] | build/lib.linux-x86_64-3.6/detectron2/evaluation/testing.py build/lib.linux-x86_64-3.6/detectron2/export/caffe2_inference.py build/lib.linux-x86_64-3.6/detectron2/detectron2/config/compat.py detectron2/detectron2/data/samplers/distributed_sampler.py detectron2/utils/collect_env.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py detectron2/data/datasets/register_soba.py detectron2/data/detection_utils.py detectron2/modeling/proposal_generator/rpn_outputs.py detectron2/detectron2/modeling/anchor_generator.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/samplers/distributed_sampler.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/lvis_evaluation.py detectron2/model_zoo/__init__.py build/lib.linux-x86_64-3.6/detectron2/export/caffe2_modeling.py detectron2/evaluation/sem_seg_evaluation.py detectron2/evaluation/lvis_evaluation.py detectron2/detectron2/modeling/postprocessing.py detectron2/utils/registry.py detectron2/modeling/backbone/backbone.py PythonAPI/build/lib.linux-x86_64-3.6/pysobatools/sobaeval.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/wrappers.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/build.py detectron2/detectron2/modeling/roi_heads/roi_heads.py detectron2/utils/logger.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/backbone/fpn.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/meta_arch/panoptic_fpn.py detectron2/modeling/meta_arch/retinanet.py detectron2/export/caffe2_export.py build/lib.linux-x86_64-3.6/detectron2/detectron2/solver/build.py detectron2/export/shared.py detectron2/layers/roi_align.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/anchor_generator.py detectron2/detectron2/data/datasets/lvis.py detectron2/data/samplers/distributed_sampler.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/fast_rcnn.py detectron2/checkpoint/model_zoo.py detectron2/modeling/matchor.py projects/LISA/SOAP.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/coco_evaluation.py build/lib.linux-x86_64-3.6/detectron2/config/__init__.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/lvis_v0_5_categories.py build/lib.linux-x86_64-3.6/detectron2/engine/__init__.py detectron2/detectron2/utils/collect_env.py detectron2/detectron2/data/datasets/builtin_meta.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/testing.py detectron2/evaluation/testing.py detectron2/structures/instances.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/roi_align_rotated.py build/lib.linux-x86_64-3.6/detectron2/modeling/backbone/fpn.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/keypoint_head.py detectron2/detectron2/utils/comm.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/panoptic_evaluation.py detectron2/export/c10.py detectron2/export/__init__.py detectron2/detectron2/modeling/backbone/backbone.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/video_visualizer.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/testing.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/proposal_generator/rpn.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/roi_heads/roi_heads.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/__init__.py detectron2/detectron2/evaluation/cityscapes_evaluation.py 
build/lib.linux-x86_64-3.6/detectron2/data/transforms/transform_gen.py build/lib.linux-x86_64-3.6/detectron2/modeling/matchor.py detectron2/detectron2/data/catalog.py PythonAPI/pysobatools/cocoeval.py detectron2/detectron2/evaluation/testing.py detectron2/data/datasets/coco.py build/lib.linux-x86_64-3.6/detectron2/layers/shape_spec.py build/lib.linux-x86_64-3.6/detectron2/detectron2/checkpoint/c2_model_loading.py detectron2/detectron2/layers/wrappers.py projects/LISA/LISA/config.py build/lib.linux-x86_64-3.6/detectron2/data/transforms/transform.py build/lib.linux-x86_64-3.6/detectron2/detectron2/engine/defaults.py projects/LISA/train_net.py build/lib.linux-x86_64-3.6/detectron2/utils/visualizer.py detectron2/utils/serialize.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/backbone/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/meta_arch/semantic_seg.py detectron2/detectron2/data/transforms/__init__.py detectron2/detectron2/utils/__init__.py detectron2/engine/train_loop.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/postprocessing.py detectron2/data/datasets/cityscapes.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/solver/lr_scheduler.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/register_soba.py build/lib.linux-x86_64-3.6/detectron2/data/build.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/builtin.py build/lib.linux-x86_64-3.6/detectron2/detectron2/structures/masks.py detectron2/detectron2/modeling/backbone/build.py detectron2/detectron2/checkpoint/c2_model_loading.py build/lib.linux-x86_64-3.6/detectron2/utils/collect_env.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/proposal_generator/rrpn_outputs.py PythonAPI/build/lib.linux-x86_64-3.6/pysobatools/mask.py detectron2/detectron2/checkpoint/__init__.py tests/test_nms_rotated.py detectron2/model_zoo/model_zoo.py detectron2/detectron2/data/samplers/grouped_batch_sampler.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/LISA_rpn.py detectron2/evaluation/rotated_coco_evaluation.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/roi_heads.py build/lib.linux-x86_64-3.6/detectron2/structures/instances.py detectron2/modeling/roi_heads/__init__.py detectron2/detectron2/solver/lr_scheduler.py detectron2/modeling/proposal_generator/__init__.py detectron2/detectron2/layers/roi_align_rotated.py detectron2/detectron2/structures/rotated_boxes.py detectron2/detectron2/modeling/roi_heads/fast_rcnn.py build/lib.linux-x86_64-3.6/detectron2/modeling/postprocessing.py build/lib.linux-x86_64-3.6/detectron2/data/transforms/__init__.py detectron2/__init__.py detectron2/data/datasets/register_coco.py PythonAPI/pysobatools/mask.py build/lib.linux-x86_64-3.6/detectron2/export/patcher.py build/lib.linux-x86_64-3.6/detectron2/modeling/poolers.py tests/test_rotated_boxes.py build/lib.linux-x86_64-3.6/detectron2/modeling/backbone/backbone.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/roi_heads/mask_head.py detectron2/detectron2/config/config.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/pascal_voc.py detectron2/structures/masks.py detectron2/solver/__init__.py build/lib.linux-x86_64-3.6/detectron2/utils/events.py projects/LISA/visualize_data.py detectron2/modeling/meta_arch/LISA_meta_arch.py build/lib.linux-x86_64-3.6/detectron2/detectron2/config/defaults.py build/lib.linux-x86_64-3.6/detectron2/layers/__init__.py 
detectron2/modeling/backbone/resnet.py detectron2/detectron2/modeling/meta_arch/build.py tests/test_box2box_transform.py build/lib.linux-x86_64-3.6/detectron2/evaluation/cityscapes_evaluation.py detectron2/detectron2/structures/instances.py detectron2/detectron2/modeling/roi_heads/mask_head.py detectron2/utils/colormap.py detectron2/evaluation/evaluation/evaluator.py detectron2/modeling/meta_arch/panoptic_fpn.py tests/test_config.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/proposal_generator/build.py detectron2/data/datasets/lvis_v0_5_categories.py detectron2/modeling/poolers.py build/lib.linux-x86_64-3.6/detectron2/evaluation/coco_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/sampling.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/cityscapes_evaluation.py PythonAPI/pysobatools/soba.py projects/LISA/server.py detectron2/evaluation/cityscapes_evaluation.py detectron2/evaluation/evaluation/pascal_voc_evaluation.py detectron2/detectron2/data/datasets/lvis_v0_5_categories.py build/lib.linux-x86_64-3.6/detectron2/detectron2/config/config.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/rotated_fast_rcnn.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/cascade_rcnn.py projects/LISA/visualize_json_results.py detectron2/checkpoint/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/box_regression.py detectron2/detectron2/modeling/meta_arch/panoptic_fpn.py detectron2/detectron2/structures/image_list.py detectron2/detectron2/layers/roi_align.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/register_coco.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/transforms/transform_gen.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/meta_arch/rcnn.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/rotated_boxes.py build/lib.linux-x86_64-3.6/detectron2/detectron2/engine/launch.py build/lib.linux-x86_64-3.6/detectron2/model_zoo/model_zoo.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/dataset_mapper.py build/lib.linux-x86_64-3.6/detectron2/layers/rotated_boxes.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/sem_seg_evaluation.py build/lib.linux-x86_64-3.6/detectron2/data/samplers/grouped_batch_sampler.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/serialize.py detectron2/detectron2/layers/shape_spec.py detectron2/detectron2/utils/video_visualizer.py detectron2/modeling/roi_heads/mask_head.py detectron2/detectron2/layers/deform_conv.py detectron2/detectron2/modeling/test_time_augmentation.py build/lib.linux-x86_64-3.6/detectron2/model_zoo/__init__.py detectron2/detectron2/data/detection_utils.py build/lib.linux-x86_64-3.6/detectron2/modeling/backbone/build.py build/lib.linux-x86_64-3.6/detectron2/structures/rotated_boxes.py detectron2/modeling/sampling.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/sem_seg_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/solver/__init__.py detectron2/evaluation/panoptic_evaluation.py detectron2/detectron2/config/compat.py detectron2/detectron2/config/__init__.py docs/conf.py detectron2/utils/comm.py detectron2/detectron2/config/defaults.py build/lib.linux-x86_64-3.6/detectron2/solver/lr_scheduler.py build/lib.linux-x86_64-3.6/detectron2/modeling/box_regression.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/builtin.py build/lib.linux-x86_64-3.6/detectron2/layers/roi_align_rotated.py detectron2/modeling/box_regression.py 
build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/events.py detectron2/config/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/comm.py projects/LISA/utils.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/proposal_generator/rrpn.py detectron2/modeling/roi_heads/box_head.py detectron2/modeling/meta_arch/__init__.py detectron2/solver/build.py build/lib.linux-x86_64-3.6/detectron2/engine/hooks.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/builtin_meta.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/backbone/resnet.py detectron2/data/transforms/__init__.py PythonAPI/setup.py detectron2/detectron2/structures/boxes.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/cityscapes.py tests/test_roi_pooler.py build/lib.linux-x86_64-3.6/detectron2/data/detection_utils.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/structures/__init__.py build/lib.linux-x86_64-3.6/detectron2/export/caffe2_export.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/build.py detectron2/detectron2/structures/masks.py detectron2/detectron2/structures/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/register_soba.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/box_head.py build/lib.linux-x86_64-3.6/detectron2/__init__.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/mask_head.py detectron2/detectron2/data/transforms/transform.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/transforms/transform.py detectron2/detectron2/utils/env.py detectron2/modeling/matcher.py detectron2/detectron2/modeling/proposal_generator/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/batch_norm.py detectron2/modeling/proposal_generator/proposal_utils.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py build/lib.linux-x86_64-3.6/detectron2/utils/colormap.py detectron2/modeling/roi_heads/cascade_rcnn.py detectron2/detectron2/modeling/meta_arch/rcnn.py build/lib.linux-x86_64-3.6/detectron2/detectron2/engine/train_loop.py build/lib.linux-x86_64-3.6/detectron2/structures/image_list.py detectron2/data/transforms/transform.py detectron2/detectron2/utils/registry.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/pascal_voc_evaluation.py detectron2/detectron2/data/datasets/register_soba.py projects/LISA/__init__.py detectron2/detectron2/data/build.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/soba.py build/lib.linux-x86_64-3.6/detectron2/export/api.py detectron2/modeling/roi_heads/keypoint_head.py build/lib.linux-x86_64-3.6/detectron2/modeling/anchor_generator.py tests/test_anchor_generator.py build/lib.linux-x86_64-3.6/detectron2/detectron2/checkpoint/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/__init__.py build/lib.linux-x86_64-3.6/detectron2/utils/comm.py projects/LISA/LISA/LISA_meta_arch.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/__init__.py build/lib.linux-x86_64-3.6/detectron2/modeling/meta_arch/panoptic_fpn.py build/lib.linux-x86_64-3.6/detectron2/utils/video_visualizer.py detectron2/detectron2/data/datasets/register_coco.py detectron2/detectron2/modeling/backbone/resnet.py detectron2/checkpoint/c2_model_loading.py build/lib.linux-x86_64-3.6/detectron2/layers/nms.py build/lib.linux-x86_64-3.6/detectron2/export/c10.py detectron2/detectron2/data/transforms/transform_gen.py detectron2/detectron2/engine/__init__.py 
build/lib.linux-x86_64-3.6/detectron2/checkpoint/__init__.py build/lib.linux-x86_64-3.6/detectron2/evaluation/soba_evaluation.py build/lib.linux-x86_64-3.6/detectron2/evaluation/pascal_voc_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/config/__init__.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/structures/keypoints.py build/lib.linux-x86_64-3.6/detectron2/layers/roi_align.py detectron2/detectron2/data/dataset_mapper.py detectron2/detectron2/utils/colormap.py detectron2/detectron2/modeling/sampling.py detectron2/evaluation/evaluation/panoptic_evaluation.py projects/LISA/defaults.py detectron2/layers/rotated_boxes.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/builtin_meta.py detectron2/detectron2/engine/hooks.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/lvis_v0_5_categories.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/collect_env.py detectron2/evaluation/pascal_voc_evaluation.py detectron2/modeling/backbone/build.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/__init__.py detectron2/modeling/__init__.py detectron2/utils/env.py detectron2/detectron2/modeling/meta_arch/__init__.py detectron2/layers/shape_spec.py build/lib.linux-x86_64-3.6/detectron2/layers/mask_ops.py detectron2/data/__init__.py detectron2/evaluation/evaluator.py detectron2/detectron2/layers/rotated_boxes.py detectron2/modeling/proposal_generator/build.py detectron2/utils/visualizer.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/evaluator.py detectron2/detectron2/modeling/box_regression.py detectron2/detectron2/data/common.py detectron2/modeling/proposal_generator/rpn.py build/lib.linux-x86_64-3.6/detectron2/modeling/backbone/__init__.py tests/test_data_loader.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/catalog.py detectron2/detectron2/layers/batch_norm.py detectron2/modeling/postprocessing.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/meta_arch/retinanet.py detectron2/config/compat.py build/lib.linux-x86_64-3.6/detectron2/evaluation/lvis_evaluation.py build/lib.linux-x86_64-3.6/detectron2/checkpoint/detection_checkpoint.py detectron2/detectron2/structures/keypoints.py detectron2/detectron2/utils/visualizer.py detectron2/detectron2/utils/logger.py build/lib.linux-x86_64-3.6/detectron2/modeling/meta_arch/__init__.py PythonAPI/pysobatools/sobaeval.py detectron2/detectron2/evaluation/coco_evaluation.py detectron2/evaluation/evaluation/cityscapes_evaluation.py detectron2/detectron2/__init__.py tests/test_boxes.py build/lib.linux-x86_64-3.6/detectron2/detectron2/checkpoint/detection_checkpoint.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/coco.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/register_coco.py detectron2/data/samplers/__init__.py detectron2/detectron2/modeling/matcher.py detectron2/modeling/meta_arch/semantic_seg.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/cityscapes_evaluation.py detectron2/detectron2/modeling/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/env.py detectron2/data/samplers/grouped_batch_sampler.py detectron2/utils/events.py setup.py detectron2/layers/roi_align_rotated.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/detection_utils.py detectron2/detectron2/data/datasets/builtin.py build/lib.linux-x86_64-3.6/detectron2/data/common.py build/lib.linux-x86_64-3.6/detectron2/modeling/roi_heads/LISA_rcnn.py 
detectron2/evaluation/soba_evaluation.py detectron2/utils/__init__.py build/lib.linux-x86_64-3.6/detectron2/modeling/__init__.py detectron2/detectron2/evaluation/panoptic_evaluation.py projects/LISA/LISA/matchor.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/proposal_generator/rpn_outputs.py detectron2/layers/__init__.py detectron2/modeling/roi_heads/roi_heads.py detectron2/data/transforms/transform_gen.py detectron2/detectron2/modeling/backbone/fpn.py detectron2/detectron2/evaluation/evaluator.py detectron2/detectron2/evaluation/sem_seg_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/backbone/backbone.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/rpn_outputs.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/samplers/grouped_batch_sampler.py build/lib.linux-x86_64-3.6/detectron2/structures/keypoints.py projects/LISA/setModel.py build/lib.linux-x86_64-3.6/detectron2/utils/__init__.py detectron2/detectron2/modeling/poolers.py build/lib.linux-x86_64-3.6/detectron2/detectron2/engine/hooks.py detectron2/solver/lr_scheduler.py detectron2/data/datasets/pascal_voc.py detectron2/export/caffe2_modeling.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/proposal_generator/proposal_utils.py detectron2/data/catalog.py build/lib.linux-x86_64-3.6/detectron2/engine/train_loop.py build/lib.linux-x86_64-3.6/detectron2/engine/launch.py detectron2/modeling/proposal_generator/rrpn_outputs.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/nms.py build/lib.linux-x86_64-3.6/detectron2/modeling/meta_arch/semantic_seg.py detectron2/detectron2/evaluation/lvis_evaluation.py detectron2/config/defaults.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/roi_align.py detectron2/detectron2/modeling/roi_heads/__init__.py detectron2/modeling/meta_arch/rcnn.py detectron2/detectron2/checkpoint/model_zoo.py detectron2/evaluation/__init__.py detectron2/detectron2/modeling/roi_heads/box_head.py build/lib.linux-x86_64-3.6/detectron2/modeling/meta_arch/build.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/visualizer.py detectron2/checkpoint/detection_checkpoint.py detectron2/detectron2/evaluation/pascal_voc_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/meta_arch/build.py detectron2/detectron2/modeling/proposal_generator/rpn_outputs.py detectron2/detectron2/layers/mask_ops.py detectron2/export/api.py detectron2/structures/keypoints.py detectron2/detectron2/modeling/proposal_generator/rrpn.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/colormap.py detectron2/config/config.py build/lib.linux-x86_64-3.6/detectron2/evaluation/sem_seg_evaluation.py build/lib.linux-x86_64-3.6/detectron2/data/samplers/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/checkpoint/model_zoo.py detectron2/data/dataset_mapper.py build/lib.linux-x86_64-3.6/detectron2/solver/__init__.py PythonAPI/build/lib.linux-x86_64-3.6/pysobatools/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/registry.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/panoptic_evaluation.py build/lib.linux-x86_64-3.6/detectron2/modeling/meta_arch/LISA_meta_arch.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/proposal_generator/__init__.py detectron2/modeling/proposal_generator/rrpn.py build/lib.linux-x86_64-3.6/detectron2/modeling/sampling.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/pascal_voc.py build/lib.linux-x86_64-3.6/detectron2/config/defaults.py 
build/lib.linux-x86_64-3.6/detectron2/modeling/meta_arch/retinanet.py build/lib.linux-x86_64-3.6/detectron2/detectron2/__init__.py tests/test_roi_heads.py demo/predictor.py detectron2/data/datasets/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/roi_heads/keypoint_head.py detectron2/detectron2/engine/train_loop.py build/lib.linux-x86_64-3.6/detectron2/config/compat.py build/lib.linux-x86_64-3.6/detectron2/modeling/meta_arch/rcnn.py detectron2/data/build.py build/lib.linux-x86_64-3.6/detectron2/data/__init__.py build/lib.linux-x86_64-3.6/detectron2/data/samplers/distributed_sampler.py build/lib.linux-x86_64-3.6/detectron2/detectron2/structures/boxes.py build/lib.linux-x86_64-3.6/detectron2/evaluation/panoptic_evaluation.py tests/test_rpn.py build/lib.linux-x86_64-3.6/detectron2/engine/defaults.py build/lib.linux-x86_64-3.6/detectron2/modeling/backbone/resnet.py detectron2/detectron2/evaluation/soba_evaluation.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/rrpn_outputs.py build/lib.linux-x86_64-3.6/detectron2/modeling/matcher.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/meta_arch/__init__.py detectron2/modeling/meta_arch/build.py build/lib.linux-x86_64-3.6/detectron2/structures/boxes.py detectron2/evaluation/evaluation/sem_seg_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/structures/rotated_boxes.py PythonAPI/pysobatools/__init__.py detectron2/evaluation/coco_evaluation.py detectron2/utils/memory.py tests/test_visualizer.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/mask_ops.py projects/LISA/demo.py detectron2/detectron2/data/datasets/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/engine/__init__.py build/lib.linux-x86_64-3.6/detectron2/layers/batch_norm.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/coco_evaluation.py detectron2/modeling/backbone/fpn.py tests/test_roi_align_rotated.py datasets/prepare_panoptic_fpn.py detectron2/detectron2/modeling/meta_arch/retinanet.py build/lib.linux-x86_64-3.6/detectron2/utils/env.py detectron2/evaluation/evaluation/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/structures/image_list.py tests/test_checkpoint.py detectron2/structures/image_list.py detectron2/evaluation/evaluation/coco_evaluation.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/lvis.py detectron2/detectron2/data/datasets/cityscapes.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluator.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/roi_heads/box_head.py detectron2/detectron2/layers/__init__.py detectron2/modeling/anchor_generator.py projects/LISA/LISA/LISA_rcnn.py detectron2/detectron2/data/datasets/coco.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/rrpn.py detectron2/detectron2/checkpoint/detection_checkpoint.py build/lib.linux-x86_64-3.6/detectron2/detectron2/utils/logger.py detectron2/evaluation/evaluation/testing.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/lvis_evaluation.py detectron2/modeling/roi_heads/fast_rcnn.py detectron2/detectron2/utils/serialize.py detectron2/data/datasets/lvis.py detectron2/detectron2/modeling/roi_heads/keypoint_head.py detectron2/detectron2/modeling/proposal_generator/rrpn_outputs.py detectron2/structures/__init__.py build/lib.linux-x86_64-3.6/detectron2/utils/registry.py detectron2/modeling/roi_heads/rotated_fast_rcnn.py detectron2/detectron2/modeling/backbone/__init__.py 
detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py build/lib.linux-x86_64-3.6/detectron2/config/config.py detectron2/utils/video_visualizer.py detectron2/detectron2/engine/launch.py detectron2/data/datasets/soba.py detectron2/detectron2/engine/defaults.py build/lib.linux-x86_64-3.6/detectron2/checkpoint/model_zoo.py detectron2/modeling/backbone/__init__.py detectron2/modeling/roi_heads/LISA_rcnn.py projects/LISA/predictor.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/backbone/build.py build/lib.linux-x86_64-3.6/detectron2/structures/__init__.py detectron2/detectron2/solver/__init__.py detectron2/detectron2/data/__init__.py build/lib.linux-x86_64-3.6/detectron2/checkpoint/c2_model_loading.py detectron2/modeling/test_time_augmentation.py tests/test_roi_align.py detectron2/detectron2/modeling/proposal_generator/build.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/proposal_utils.py detectron2/detectron2/modeling/proposal_generator/rpn.py build/lib.linux-x86_64-3.6/detectron2/evaluation/rotated_coco_evaluation.py demo/demo.py build/lib.linux-x86_64-3.6/detectron2/modeling/proposal_generator/rpn.py detectron2/detectron2/evaluation/__init__.py detectron2/export/patcher.py detectron2/export/caffe2_inference.py tests/test_model_e2e.py detectron2/checkpoint/catalog.py detectron2/layers/batch_norm.py build/lib.linux-x86_64-3.6/detectron2/evaluation/__init__.py PythonAPI/build/lib.linux-x86_64-3.6/pysobatools/cocoeval.py tests/test_sampler.py build/lib.linux-x86_64-3.6/detectron2/layers/deform_conv.py build/lib.linux-x86_64-3.6/detectron2/data/datasets/coco.py detectron2/evaluation/evaluation/lvis_evaluation.py detectron2/detectron2/solver/build.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/soba_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/cityscapes.py detectron2/engine/__init__.py build/lib.linux-x86_64-3.6/detectron2/data/dataset_mapper.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/pascal_voc_evaluation.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/test_time_augmentation.py detectron2/data/datasets/builtin.py detectron2/detectron2/data/samplers/__init__.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/lvis.py detectron2/detectron2/data/datasets/pascal_voc.py build/lib.linux-x86_64-3.6/detectron2/export/shared.py build/lib.linux-x86_64-3.6/detectron2/data/catalog.py detectron2/engine/hooks.py detectron2/layers/mask_ops.py detectron2/data/datasets/builtin_meta.py tests/test_data_transform.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/matcher.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/transforms/__init__.py build/lib.linux-x86_64-3.6/detectron2/utils/memory.py build/lib.linux-x86_64-3.6/detectron2/utils/serialize.py build/lib.linux-x86_64-3.6/detectron2/modeling/test_time_augmentation.py tests/test_model_zoo.py build/lib.linux-x86_64-3.6/detectron2/utils/logger.py PythonAPI/build/lib.linux-x86_64-3.6/pysobatools/soba.py tests/test_mask_ops.py detectron2/layers/nms.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/shape_spec.py detectron2/modeling/proposal_generator/LISA_rpn.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/common.py build/lib.linux-x86_64-3.6/detectron2/solver/build.py build/lib.linux-x86_64-3.6/detectron2/detectron2/layers/deform_conv.py build/lib.linux-x86_64-3.6/detectron2/evaluation/evaluation/evaluator.py detectron2/layers/wrappers.py 
build/lib.linux-x86_64-3.6/detectron2/detectron2/data/samplers/__init__.py projects/LISA/LISA/__init__.py detectron2/engine/defaults.py build/lib.linux-x86_64-3.6/detectron2/layers/wrappers.py projects/LISA/LISA/LISA_rpn.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/roi_heads/__init__.py detectron2/layers/deform_conv.py build/lib.linux-x86_64-3.6/detectron2/checkpoint/catalog.py detectron2/structures/rotated_boxes.py build/lib.linux-x86_64-3.6/detectron2/detectron2/structures/instances.py build/lib.linux-x86_64-3.6/detectron2/structures/masks.py build/lib.linux-x86_64-3.6/detectron2/detectron2/evaluation/__init__.py detectron2/detectron2/modeling/meta_arch/semantic_seg.py build/lib.linux-x86_64-3.6/detectron2/detectron2/data/datasets/soba.py detectron2/detectron2/data/datasets/soba.py tests/test_fast_rcnn.py detectron2/detectron2/modeling/proposal_generator/proposal_utils.py build/lib.linux-x86_64-3.6/detectron2/detectron2/modeling/poolers.py detectron2/detectron2/utils/events.py detectron2/detectron2/layers/nms.py detectron2/engine/launch.py build/lib.linux-x86_64-3.6/detectron2/export/__init__.py tests/__init__.py detectron2/structures/boxes.py detectron2/data/common.py get_model_zoo_configs get_extensions get_version convert_c2_detectron_names convert_basic_c2_names align_and_update_state_dicts Detectron2Handler ModelCatalogHandler ModelCatalog DetectionCheckpointer Detectron2Handler ModelCatalogHandler ModelCatalog upgrade_config downgrade_config _rename _RenameConverter ConverterV2 ConverterV1 guess_version set_global_cfg get_cfg CfgNode build_batch_data_sampler print_instances_class_histogram build_detection_test_loader filter_images_with_few_keypoints build_detection_train_loader load_proposals_into_dataset worker_init_reset_seed get_detection_dataset_dicts trivial_batch_collator filter_images_with_only_crowd_annotations _quantize MetadataCatalog DatasetCatalog Metadata DatasetFromList MapDataset DatasetMapper check_metadata_consistency annotations_to_instances transform_instance_annotations annotations_to_instances_rotated transform_proposals gen_crop_transform_with_instance build_transform_gen check_image_size create_keypoint_hflip_indices SizeMismatchError filter_empty_instances transform_keypoint_annotations read_image register_all_coco register_all_cityscapes register_all_lvis register_all_pascal_voc _get_coco_instances_meta _get_builtin_metadata _get_coco_panoptic_separated_meta load_cityscapes_instances load_cityscapes_semantic cityscapes_files_to_dict load_sem_seg load_coco_json register_lvis_instances get_lvis_instances_meta _get_lvis_instances_meta_v0_5 load_lvis_json load_voc_instances register_pascal_voc merge_to_panoptic register_coco_panoptic_separated register_coco_instances merge_to_panoptic register_soba_panoptic_separated register_soba_instances load_sem_seg load_soba_json RepeatFactorTrainingSampler InferenceSampler TrainingSampler GroupedBatchSampler ExtentTransform Resize_rotated_box ResizeTransform HFlip_rotated_box TransformGen RandomFlip RandomSaturation check_dtype RandomLighting apply_transform_gens RandomContrast Resize RandomCrop ResizeShortestEdge RandomExtent RandomBrightness convert_c2_detectron_names convert_basic_c2_names align_and_update_state_dicts DetectionCheckpointer Detectron2Handler ModelCatalogHandler ModelCatalog upgrade_config downgrade_config _rename _RenameConverter ConverterV2 ConverterV1 guess_version set_global_cfg get_cfg CfgNode build_batch_data_sampler print_instances_class_histogram build_detection_test_loader 
filter_images_with_few_keypoints build_detection_train_loader load_proposals_into_dataset worker_init_reset_seed get_detection_dataset_dicts trivial_batch_collator filter_images_with_only_crowd_annotations _quantize MetadataCatalog DatasetCatalog Metadata DatasetFromList MapDataset DatasetMapper check_metadata_consistency annotations_to_instances transform_instance_annotations annotations_to_instances_rotated transform_proposals gen_crop_transform_with_instance build_transform_gen check_image_size create_keypoint_hflip_indices SizeMismatchError filter_empty_instances transform_keypoint_annotations read_image register_all_coco register_all_cityscapes register_all_lvis register_all_pascal_voc _get_coco_instances_meta _get_builtin_metadata _get_coco_panoptic_separated_meta load_cityscapes_instances load_cityscapes_semantic cityscapes_files_to_dict load_sem_seg load_coco_json register_lvis_instances get_lvis_instances_meta _get_lvis_instances_meta_v0_5 load_lvis_json load_voc_instances register_pascal_voc merge_to_panoptic register_coco_panoptic_separated register_coco_instances merge_to_panoptic register_soba_panoptic_separated register_soba_instances load_sem_seg load_coco_json RepeatFactorTrainingSampler InferenceSampler TrainingSampler GroupedBatchSampler ExtentTransform Resize_rotated_box ResizeTransform HFlip_rotated_box TransformGen RandomFlip RandomSaturation check_dtype RandomLighting apply_transform_gens RandomContrast Resize RandomCrop ResizeShortestEdge RandomExtent RandomBrightness default_setup DefaultTrainer DefaultPredictor default_argument_parser PeriodicCheckpointer LRScheduler IterationTimer PreciseBN CallbackHook PeriodicWriter AutogradProfiler EvalHook launch _distributed_worker _find_free_port SimpleTrainer HookBase TrainerBase CityscapesEvaluator _evaluate_box_proposals _evaluate_predictions_on_coco COCOEvaluator instances_to_json DatasetEvaluator DatasetEvaluators inference_on_dataset inference_context _evaluate_predictions_on_lvis _evaluate_box_proposals LVISEvaluator COCOPanopticEvaluator _print_panoptic_results PascalVOCDetectionEvaluator parse_rec voc_eval voc_ap SemSegEvaluator _evaluate_box_proposals instances_to_json _evaluate_predictions_on_soba SOBAEvaluator verify_results print_csv_format flatten_results_dict NaiveSyncBatchNorm FrozenBatchNorm2d AllReduce get_norm ModulatedDeformConv DeformConv _ModulatedDeformConv _DeformConv paste_mask_in_image_old pad_masks _do_paste_mask paste_masks_in_image scale_boxes nms_rotated batched_nms_rotated batched_nms ROIAlign _ROIAlign _ROIAlignRotated ROIAlignRotated pairwise_iou_rotated ShapeSpec _NewEmptyTensorOp Conv2d interpolate BatchNorm2d ConvTranspose2d cat RotatedAnchorGenerator build_anchor_generator _create_grid_offsets BufferList DefaultAnchorGenerator Box2BoxTransformRotated Box2BoxTransform Matcher convert_boxes_to_pooler_format assign_boxes_to_levels ROIPooler detector_postprocess decode takeTwo sem_seg_postprocess rect_distance matchor dist compute_iou compute_direction combine_association box_combine encode subsample_labels DatasetMapperTTA GeneralizedRCNNWithTTA Backbone build_backbone LastLevelMaxPool build_retinanet_resnet_fpn_backbone build_resnet_fpn_backbone FPN LastLevelP6P7 _assert_strides_are_log2_contiguous ResNet ResNetBlockBase BottleneckBlock build_resnet_backbone DeformBottleneckBlock make_stage BasicStem build_model combine_semantic_and_instance_outputs PanopticFPN ProposalNetwork GeneralizedRCNN permute_all_cls_and_box_to_N_HWA_K_and_concat permute_to_N_HWA_K RetinaNet RetinaNetHead 
SemanticSegmentor SemSegFPNHead build_sem_seg_head build_proposal_generator add_ground_truth_to_proposals add_ground_truth_to_proposals_single_image RPN StandardRPNHead build_rpn_head rpn_losses RPNOutputs find_top_rpn_proposals RRPN RRPNOutputs find_top_rrpn_proposals build_box_head FastRCNNConvFCHead _ScaleGradient CascadeROIHeads LightdirectionOutputLayer FastRCNNOutputs fast_rcnn_inference fast_rcnn_losses fast_rcnn_inference_single_image FastRCNNOutputLayers build_keypoint_head keypoint_rcnn_inference KRCNNConvDeconvUpsampleHead keypoint_rcnn_loss MaskRCNNConvUpsampleHead mask_rcnn_loss build_mask_head mask_rcnn_inference RROIHeads Res5ROIHeads build_roi_heads ROIHeads select_proposals_with_visible_keypoints select_foreground_proposals StandardROIHeads build_lr_scheduler build_optimizer WarmupMultiStepLR WarmupCosineLR _get_warmup_factor_at_iter pairwise_iou matched_boxlist_iou BoxMode Boxes ImageList Instances Keypoints _keypoints_to_heatmap heatmaps_to_keypoints BitMasks rasterize_polygons_within_box PolygonMasks polygons_to_bitmask pairwise_iou RotatedBoxes get_env_module collect_torch_env collect_env_info random_color colormap get_local_size synchronize get_world_size get_local_rank reduce_dict _get_global_gloo_group shared_random_seed all_gather get_rank gather _serialize_to_tensor is_main_process _pad_to_largest_tensor setup_environment setup_custom_environment _configure_libraries _import_file seed_all_rng EventStorage TensorboardXWriter get_event_storage JSONWriter CommonMetricPrinter _cached_log_stream log_first_n _find_caller setup_logger log_every_n create_small_table _ColorfulFormatter Registry PicklableWrapper _DetectedInstance VideoVisualizer VisImage Visualizer GenericMask ColorMode _PanopticPrediction _create_text_labels default_setup DefaultTrainer DefaultPredictor default_argument_parser PeriodicCheckpointer LRScheduler IterationTimer PreciseBN CallbackHook PeriodicWriter AutogradProfiler EvalHook launch _distributed_worker _find_free_port SimpleTrainer HookBase TrainerBase CityscapesEvaluator _evaluate_box_proposals _evaluate_predictions_on_coco COCOEvaluator instances_to_json DatasetEvaluator DatasetEvaluators inference_on_dataset inference_context _evaluate_predictions_on_lvis _evaluate_box_proposals LVISEvaluator COCOPanopticEvaluator _print_panoptic_results PascalVOCDetectionEvaluator parse_rec voc_eval voc_ap RotatedCOCOeval RotatedCOCOEvaluator SemSegEvaluator _evaluate_box_proposals instances_to_json _evaluate_predictions_on_soba SOBAEvaluator verify_results print_csv_format flatten_results_dict CityscapesEvaluator _evaluate_box_proposals _evaluate_predictions_on_coco COCOEvaluator instances_to_json DatasetEvaluator DatasetEvaluators inference_on_dataset inference_context _evaluate_predictions_on_lvis _evaluate_box_proposals LVISEvaluator COCOPanopticEvaluator _print_panoptic_results PascalVOCDetectionEvaluator parse_rec voc_eval voc_ap SemSegEvaluator verify_results print_csv_format flatten_results_dict Caffe2Model add_export_config export_caffe2_model Caffe2KeypointRCNNInference Caffe2RPN Caffe2Compatible Caffe2FastRCNNOutputsInference Caffe2MaskRCNNInference Boxes4or5 Caffe2ROIPooler InstancesList run_and_save_graph _op_stats export_caffe2_detection_model _assign_device_option _export_via_onnx ProtobufDetectionModel ProtobufModel Caffe2GeneralizedRCNN Caffe2RetinaNet set_caffe2_compatible_tensor_mode _cast_to_f32 Caffe2PanopticFPN Caffe2MetaArch assemble_rcnn_outputs_by_name convert_batched_inputs_to_c2_format GenericMixin mock_fastrcnn_outputs_inference 
ROIHeadsPatcher patch Caffe2CompatibleConverter patch_generalized_rcnn mock_mask_rcnn_inference mock_keypoint_rcnn_inference rename_op_input get_sub_graph_external_input_output to_device IllegalGraphTransformError mock_torch_nn_functional_interpolate save_graph_base identify_reshape_sub_graph alias remove_reshape_for_fc construct_init_net_from_params DiGraph get_params_from_init_net get_pb_arg_valstrings _modify_blob_names _rename_blob group_norm_replace_aten_with_caffe2 _rename_versioned_blob_in_proto get_pb_arg_ints ScopedWS get_pb_arg_vals save_graph get_pb_arg_floats fuse_copy_between_cpu_and_gpu get_pb_arg_vali get_pb_arg_valf _generic_status_identifier onnx_compatibale_interpolate remove_dead_end_ops get_pb_arg BilinearInterpolation check_set_pb_arg infer_device_type fetch_any_blob get_producer_map fuse_alias_placeholder rename_op_output _get_dependency_chain _updater_raise _create_const_fill_op_from_numpy create_const_fill_op get_consumer_map _create_const_fill_op_from_c2_int8_tensor NaiveSyncBatchNorm FrozenBatchNorm2d AllReduce get_norm ModulatedDeformConv DeformConv _ModulatedDeformConv _DeformConv paste_mask_in_image_old pad_masks _do_paste_mask paste_masks_in_image scale_boxes nms_rotated batched_nms_rotated batched_nms ROIAlign _ROIAlign _ROIAlignRotated ROIAlignRotated pairwise_iou_rotated ShapeSpec _NewEmptyTensorOp Conv2d interpolate BatchNorm2d ConvTranspose2d cat RotatedAnchorGenerator build_anchor_generator _create_grid_offsets BufferList DefaultAnchorGenerator Box2BoxTransformRotated Box2BoxTransform Matcher matchor convert_boxes_to_pooler_format assign_boxes_to_levels ROIPooler detector_postprocess decode takeTwo sem_seg_postprocess rect_distance matchor dist compute_iou compute_direction combine_association box_combine encode subsample_labels DatasetMapperTTA GeneralizedRCNNWithTTA Backbone build_backbone LastLevelMaxPool build_retinanet_resnet_fpn_backbone build_resnet_fpn_backbone FPN LastLevelP6P7 _assert_strides_are_log2_contiguous ResNet ResNetBlockBase BottleneckBlock build_resnet_backbone DeformBottleneckBlock make_stage BasicStem build_model LISARCNN combine_semantic_and_instance_outputs PanopticFPN ProposalNetwork GeneralizedRCNN permute_all_cls_and_box_to_N_HWA_K_and_concat permute_to_N_HWA_K RetinaNet RetinaNetHead SemanticSegmentor SemSegFPNHead build_sem_seg_head build_proposal_generator build_proposal_generator build_rpn_head LISARPNHead LISARPN add_ground_truth_to_proposals add_ground_truth_to_proposals_single_image RPN StandardRPNHead build_rpn_head rpn_losses RPNOutputs find_top_rpn_proposals RRPN RRPNOutputs find_top_rrpn_proposals build_box_head FastRCNNConvFCHead _ScaleGradient CascadeROIHeads LightdirectionOutputLayer FastRCNNOutputs fast_rcnn_inference fast_rcnn_losses fast_rcnn_inference_single_image FastRCNNOutputLayers build_keypoint_head keypoint_rcnn_inference KRCNNConvDeconvUpsampleHead keypoint_rcnn_loss RelationROIHeads MaskRCNNConvUpsampleHead mask_rcnn_loss build_mask_head mask_rcnn_inference RROIHeads Res5ROIHeads build_roi_heads ROIHeads select_proposals_with_visible_keypoints select_foreground_proposals StandardROIHeads RROIHeads RotatedFastRCNNOutputs fast_rcnn_inference_single_image_rotated fast_rcnn_inference_rotated get get_checkpoint_url get_config_file _ModelZooUrls build_lr_scheduler build_optimizer WarmupMultiStepLR WarmupCosineLR _get_warmup_factor_at_iter pairwise_iou matched_boxlist_iou BoxMode Boxes ImageList Instances Keypoints _keypoints_to_heatmap heatmaps_to_keypoints BitMasks rasterize_polygons_within_box 
PolygonMasks polygons_to_bitmask pairwise_iou RotatedBoxes get_env_module collect_torch_env collect_env_info random_color colormap get_local_size synchronize get_world_size get_local_rank reduce_dict _get_global_gloo_group shared_random_seed all_gather get_rank gather _serialize_to_tensor is_main_process _pad_to_largest_tensor setup_environment setup_custom_environment _configure_libraries _import_file seed_all_rng EventStorage TensorboardXWriter get_event_storage JSONWriter CommonMetricPrinter _cached_log_stream log_first_n _find_caller setup_logger log_every_n create_small_table _ColorfulFormatter retry_if_cuda_oom _ignore_torch_cuda_oom Registry PicklableWrapper _DetectedInstance VideoVisualizer VisImage Visualizer GenericMask ColorMode _PanopticPrediction _create_text_labels separate_coco_semantic_from_panoptic link_val100 _process_panoptic_to_semantic get_parser setup_cfg VisualizationDemo AsyncPredictor convert_c2_detectron_names convert_basic_c2_names align_and_update_state_dicts Detectron2Handler ModelCatalogHandler ModelCatalog DetectionCheckpointer Detectron2Handler ModelCatalogHandler ModelCatalog upgrade_config downgrade_config _rename _RenameConverter ConverterV2 ConverterV1 guess_version set_global_cfg get_cfg CfgNode build_batch_data_sampler print_instances_class_histogram build_detection_test_loader filter_images_with_few_keypoints build_detection_train_loader load_proposals_into_dataset worker_init_reset_seed get_detection_dataset_dicts trivial_batch_collator filter_images_with_only_crowd_annotations _quantize MetadataCatalog DatasetCatalog Metadata DatasetFromList MapDataset DatasetMapper check_metadata_consistency annotations_to_instances transform_instance_annotations annotations_to_instances_rotated transform_proposals gen_crop_transform_with_instance build_transform_gen check_image_size create_keypoint_hflip_indices SizeMismatchError filter_empty_instances transform_keypoint_annotations read_image register_all_coco register_all_cityscapes register_all_lvis register_all_pascal_voc _get_coco_instances_meta _get_builtin_metadata _get_coco_panoptic_separated_meta load_cityscapes_instances load_cityscapes_semantic cityscapes_files_to_dict load_sem_seg load_coco_json register_lvis_instances get_lvis_instances_meta _get_lvis_instances_meta_v0_5 load_lvis_json load_voc_instances register_pascal_voc merge_to_panoptic register_coco_panoptic_separated register_coco_instances merge_to_panoptic register_soba_panoptic_separated register_soba_instances load_sem_seg load_soba_json RepeatFactorTrainingSampler InferenceSampler TrainingSampler GroupedBatchSampler ExtentTransform Resize_rotated_box ResizeTransform HFlip_rotated_box TransformGen RandomFlip RandomSaturation check_dtype RandomLighting apply_transform_gens RandomContrast Resize RandomCrop ResizeShortestEdge RandomExtent RandomBrightness convert_c2_detectron_names convert_basic_c2_names align_and_update_state_dicts DetectionCheckpointer Detectron2Handler ModelCatalogHandler ModelCatalog upgrade_config downgrade_config _rename _RenameConverter ConverterV2 ConverterV1 guess_version set_global_cfg get_cfg CfgNode build_batch_data_sampler print_instances_class_histogram build_detection_test_loader filter_images_with_few_keypoints build_detection_train_loader load_proposals_into_dataset worker_init_reset_seed get_detection_dataset_dicts trivial_batch_collator filter_images_with_only_crowd_annotations _quantize MetadataCatalog DatasetCatalog Metadata DatasetFromList MapDataset DatasetMapper check_metadata_consistency 
annotations_to_instances transform_instance_annotations annotations_to_instances_rotated transform_proposals gen_crop_transform_with_instance build_transform_gen check_image_size create_keypoint_hflip_indices SizeMismatchError filter_empty_instances transform_keypoint_annotations read_image register_all_coco register_all_cityscapes register_all_lvis register_all_pascal_voc _get_coco_instances_meta _get_builtin_metadata _get_coco_panoptic_separated_meta load_cityscapes_instances load_cityscapes_semantic cityscapes_files_to_dict load_sem_seg load_coco_json register_lvis_instances get_lvis_instances_meta _get_lvis_instances_meta_v0_5 load_lvis_json load_voc_instances register_pascal_voc merge_to_panoptic register_coco_panoptic_separated register_coco_instances merge_to_panoptic register_soba_panoptic_separated register_soba_instances load_sem_seg load_coco_json RepeatFactorTrainingSampler InferenceSampler TrainingSampler GroupedBatchSampler ExtentTransform Resize_rotated_box ResizeTransform HFlip_rotated_box TransformGen RandomFlip RandomSaturation check_dtype RandomLighting apply_transform_gens RandomContrast Resize RandomCrop ResizeShortestEdge RandomExtent RandomBrightness default_setup DefaultTrainer DefaultPredictor default_argument_parser PeriodicCheckpointer LRScheduler IterationTimer PreciseBN CallbackHook PeriodicWriter AutogradProfiler EvalHook launch _distributed_worker _find_free_port SimpleTrainer HookBase TrainerBase CityscapesEvaluator _evaluate_box_proposals _evaluate_predictions_on_coco COCOEvaluator instances_to_json DatasetEvaluator DatasetEvaluators inference_on_dataset inference_context _evaluate_predictions_on_lvis _evaluate_box_proposals LVISEvaluator COCOPanopticEvaluator _print_panoptic_results PascalVOCDetectionEvaluator parse_rec voc_eval voc_ap SemSegEvaluator _evaluate_box_proposals instances_to_json _evaluate_predictions_on_soba SOBAEvaluator verify_results print_csv_format flatten_results_dict NaiveSyncBatchNorm FrozenBatchNorm2d AllReduce get_norm ModulatedDeformConv DeformConv _ModulatedDeformConv _DeformConv paste_mask_in_image_old pad_masks _do_paste_mask paste_masks_in_image scale_boxes nms_rotated batched_nms_rotated batched_nms ROIAlign _ROIAlign _ROIAlignRotated ROIAlignRotated pairwise_iou_rotated ShapeSpec _NewEmptyTensorOp Conv2d interpolate BatchNorm2d ConvTranspose2d cat RotatedAnchorGenerator build_anchor_generator _create_grid_offsets BufferList DefaultAnchorGenerator Box2BoxTransformRotated Box2BoxTransform Matcher convert_boxes_to_pooler_format assign_boxes_to_levels ROIPooler detector_postprocess decode takeTwo sem_seg_postprocess rect_distance matchor dist compute_iou compute_direction combine_association box_combine encode subsample_labels DatasetMapperTTA GeneralizedRCNNWithTTA Backbone build_backbone LastLevelMaxPool build_retinanet_resnet_fpn_backbone build_resnet_fpn_backbone FPN LastLevelP6P7 _assert_strides_are_log2_contiguous ResNet ResNetBlockBase BottleneckBlock build_resnet_backbone DeformBottleneckBlock make_stage BasicStem build_model combine_semantic_and_instance_outputs PanopticFPN ProposalNetwork GeneralizedRCNN permute_all_cls_and_box_to_N_HWA_K_and_concat permute_to_N_HWA_K RetinaNet RetinaNetHead SemanticSegmentor SemSegFPNHead build_sem_seg_head build_proposal_generator add_ground_truth_to_proposals add_ground_truth_to_proposals_single_image RPN StandardRPNHead build_rpn_head rpn_losses RPNOutputs find_top_rpn_proposals RRPN RRPNOutputs find_top_rrpn_proposals build_box_head FastRCNNConvFCHead _ScaleGradient CascadeROIHeads 
LightdirectionOutputLayer FastRCNNOutputs fast_rcnn_inference fast_rcnn_losses fast_rcnn_inference_single_image FastRCNNOutputLayers build_keypoint_head keypoint_rcnn_inference KRCNNConvDeconvUpsampleHead keypoint_rcnn_loss MaskRCNNConvUpsampleHead mask_rcnn_loss build_mask_head mask_rcnn_inference RROIHeads Res5ROIHeads build_roi_heads ROIHeads select_proposals_with_visible_keypoints select_foreground_proposals StandardROIHeads build_lr_scheduler build_optimizer WarmupMultiStepLR WarmupCosineLR _get_warmup_factor_at_iter pairwise_iou matched_boxlist_iou BoxMode Boxes ImageList Instances Keypoints _keypoints_to_heatmap heatmaps_to_keypoints BitMasks rasterize_polygons_within_box PolygonMasks polygons_to_bitmask pairwise_iou RotatedBoxes get_env_module collect_torch_env collect_env_info random_color colormap get_local_size synchronize get_world_size get_local_rank reduce_dict _get_global_gloo_group shared_random_seed all_gather get_rank gather _serialize_to_tensor is_main_process _pad_to_largest_tensor setup_environment setup_custom_environment _configure_libraries _import_file seed_all_rng EventStorage TensorboardXWriter get_event_storage JSONWriter CommonMetricPrinter _cached_log_stream log_first_n _find_caller setup_logger log_every_n create_small_table _ColorfulFormatter Registry PicklableWrapper _DetectedInstance VideoVisualizer VisImage Visualizer GenericMask ColorMode _PanopticPrediction _create_text_labels default_setup DefaultTrainer DefaultPredictor default_argument_parser PeriodicCheckpointer LRScheduler IterationTimer PreciseBN CallbackHook PeriodicWriter AutogradProfiler EvalHook launch _distributed_worker _find_free_port SimpleTrainer HookBase TrainerBase CityscapesEvaluator _evaluate_box_proposals _evaluate_predictions_on_coco COCOEvaluator instances_to_json DatasetEvaluator DatasetEvaluators inference_on_dataset inference_context _evaluate_predictions_on_lvis _evaluate_box_proposals LVISEvaluator COCOPanopticEvaluator _print_panoptic_results PascalVOCDetectionEvaluator parse_rec voc_eval voc_ap RotatedCOCOeval RotatedCOCOEvaluator SemSegEvaluator _evaluate_box_proposals instances_to_json _evaluate_predictions_on_soba SOBAEvaluator verify_results print_csv_format flatten_results_dict CityscapesEvaluator _evaluate_box_proposals _evaluate_predictions_on_coco COCOEvaluator instances_to_json DatasetEvaluator DatasetEvaluators inference_on_dataset inference_context _evaluate_predictions_on_lvis _evaluate_box_proposals LVISEvaluator COCOPanopticEvaluator _print_panoptic_results PascalVOCDetectionEvaluator parse_rec voc_eval voc_ap SemSegEvaluator verify_results print_csv_format flatten_results_dict Caffe2Model add_export_config export_caffe2_model Caffe2KeypointRCNNInference Caffe2RPN Caffe2Compatible Caffe2FastRCNNOutputsInference Caffe2MaskRCNNInference Boxes4or5 Caffe2ROIPooler InstancesList run_and_save_graph _op_stats export_caffe2_detection_model _assign_device_option _export_via_onnx ProtobufDetectionModel ProtobufModel Caffe2GeneralizedRCNN Caffe2RetinaNet set_caffe2_compatible_tensor_mode _cast_to_f32 Caffe2PanopticFPN Caffe2MetaArch assemble_rcnn_outputs_by_name convert_batched_inputs_to_c2_format GenericMixin mock_fastrcnn_outputs_inference ROIHeadsPatcher patch Caffe2CompatibleConverter patch_generalized_rcnn mock_mask_rcnn_inference mock_keypoint_rcnn_inference rename_op_input get_sub_graph_external_input_output to_device IllegalGraphTransformError mock_torch_nn_functional_interpolate save_graph_base identify_reshape_sub_graph alias remove_reshape_for_fc 
construct_init_net_from_params DiGraph get_params_from_init_net get_pb_arg_valstrings _modify_blob_names _rename_blob group_norm_replace_aten_with_caffe2 _rename_versioned_blob_in_proto get_pb_arg_ints ScopedWS get_pb_arg_vals save_graph get_pb_arg_floats fuse_copy_between_cpu_and_gpu get_pb_arg_vali get_pb_arg_valf _generic_status_identifier onnx_compatibale_interpolate remove_dead_end_ops get_pb_arg BilinearInterpolation check_set_pb_arg infer_device_type fetch_any_blob get_producer_map fuse_alias_placeholder rename_op_output _get_dependency_chain _updater_raise _create_const_fill_op_from_numpy create_const_fill_op get_consumer_map _create_const_fill_op_from_c2_int8_tensor NaiveSyncBatchNorm FrozenBatchNorm2d AllReduce get_norm ModulatedDeformConv DeformConv _ModulatedDeformConv _DeformConv paste_mask_in_image_old pad_masks _do_paste_mask paste_masks_in_image scale_boxes nms_rotated batched_nms_rotated batched_nms ROIAlign _ROIAlign _ROIAlignRotated ROIAlignRotated pairwise_iou_rotated ShapeSpec _NewEmptyTensorOp Conv2d interpolate BatchNorm2d ConvTranspose2d cat RotatedAnchorGenerator build_anchor_generator _create_grid_offsets BufferList DefaultAnchorGenerator Box2BoxTransformRotated Box2BoxTransform Matcher matchor convert_boxes_to_pooler_format assign_boxes_to_levels ROIPooler detector_postprocess decode takeTwo sem_seg_postprocess rect_distance matchor dist compute_iou compute_direction combine_association box_combine encode subsample_labels DatasetMapperTTA GeneralizedRCNNWithTTA Backbone build_backbone LastLevelMaxPool build_retinanet_resnet_fpn_backbone build_resnet_fpn_backbone FPN LastLevelP6P7 _assert_strides_are_log2_contiguous ResNet ResNetBlockBase BottleneckBlock build_resnet_backbone DeformBottleneckBlock make_stage BasicStem build_model LISARCNN combine_semantic_and_instance_outputs PanopticFPN ProposalNetwork GeneralizedRCNN permute_all_cls_and_box_to_N_HWA_K_and_concat permute_to_N_HWA_K RetinaNet RetinaNetHead SemanticSegmentor SemSegFPNHead build_sem_seg_head build_proposal_generator build_proposal_generator build_rpn_head LISARPNHead LISARPN add_ground_truth_to_proposals add_ground_truth_to_proposals_single_image RPN StandardRPNHead build_rpn_head rpn_losses RPNOutputs find_top_rpn_proposals RRPN RRPNOutputs find_top_rrpn_proposals build_box_head FastRCNNConvFCHead _ScaleGradient CascadeROIHeads LightdirectionOutputLayer FastRCNNOutputs fast_rcnn_inference fast_rcnn_losses fast_rcnn_inference_single_image FastRCNNOutputLayers build_keypoint_head keypoint_rcnn_inference KRCNNConvDeconvUpsampleHead keypoint_rcnn_loss RelationROIHeads MaskRCNNConvUpsampleHead mask_rcnn_loss build_mask_head mask_rcnn_inference RROIHeads Res5ROIHeads build_roi_heads ROIHeads select_proposals_with_visible_keypoints select_foreground_proposals StandardROIHeads RROIHeads RotatedFastRCNNOutputs fast_rcnn_inference_single_image_rotated fast_rcnn_inference_rotated get get_checkpoint_url get_config_file _ModelZooUrls build_lr_scheduler build_optimizer WarmupMultiStepLR WarmupCosineLR _get_warmup_factor_at_iter pairwise_iou matched_boxlist_iou BoxMode Boxes ImageList Instances Keypoints _keypoints_to_heatmap heatmaps_to_keypoints BitMasks rasterize_polygons_within_box PolygonMasks polygons_to_bitmask pairwise_iou RotatedBoxes get_env_module collect_torch_env collect_env_info random_color colormap get_local_size synchronize get_world_size get_local_rank reduce_dict _get_global_gloo_group shared_random_seed all_gather get_rank gather _serialize_to_tensor is_main_process _pad_to_largest_tensor 
setup_environment setup_custom_environment _configure_libraries _import_file seed_all_rng EventStorage TensorboardXWriter get_event_storage JSONWriter CommonMetricPrinter _cached_log_stream log_first_n _find_caller setup_logger log_every_n create_small_table _ColorfulFormatter retry_if_cuda_oom _ignore_torch_cuda_oom Registry PicklableWrapper _DetectedInstance VideoVisualizer VisImage Visualizer GenericMask ColorMode _PanopticPrediction _create_text_labels url_resolver setup autodoc_skip_member default_setup DefaultTrainer DefaultPredictor default_argument_parser get_parser setup_cfg VisualizationDemo AsyncPredictor image_to_byte detection remove_file rgba_to_rgb run main byte_to_image health handle_requests_by_batch getModel get_parser setup_cfg main setup Trainer compute_ap norm_boxes compute_recall apply_box_deltas compute_overlaps compute_iou resize resize_image generate_pyramid_anchors mold_mask generate_anchors compute_ap_range compute_overlaps_masks denorm_boxes unmold_mask download_trained_weights __main__ non_max_suppression minimize_mask resize_mask extract_bboxes trim_zeros compute_matches expand_mask box_refinement Dataset parse_args output setup dataset_ass_id_map create_instances dataset_id_map add_lisa_config LISARCNN LISAROIHeads build_proposal_generator build_rpn_head LISARPNHead LISARPN matchor Params COCOeval encode decode area toBbox _isArrayLike SOBA SOAPeval Params Params COCOeval encode decode area toBbox _isArrayLike SOBA SOAPeval Params TestAnchorGenerator random_rotated_boxes TestBox2BoxTransformRotated random_boxes TestBox2BoxTransform TestBoxIOU TestBoxMode TestCheckpointer TestConfigVersioning TestTransformAnnotations TestTransforms FastRCNNTest rasterize_polygons_with_grid_sample TestMaskCropPaste benchmark_paste iou_between_full_image_bit_masks create_model_input get_empty_instance RetinaNetE2ETest ModelE2ETest MaskRCNNE2ETest get_regular_bitmask_instances get_model_zoo TestModelZoo TestNMSRotated ROIAlignTest ROIAlignRotatedTest ROIHeadsTest TestROIPooler TestRotatedBoxesLayer TestRotatedBoxesStructure benchmark_rotated_iou RPNTest TestGroupedBatchSampler TestVisualizer join format strip readlines strftime getenv dirname abspath append get join format glob dirname abspath append join isdir glob unlink realpath rmtree islink symlink dirname exists deepcopy deepcopy sorted format convert_basic_c2_names zip getLogger tuple shape startswith info keys cat get_missing_parameters_message getLogger tuple warning max values list sorted view tolist shape convert_c2_detectron_names format info keys enumerate error clone get_unexpected_parameters_message len VERSION upgrade clone range VERSION downgrade clone range VERSION format warning getLogger _get _del _set split clear update getLogger format info len getLogger format info len pop sorted format getLogger set info enumerate list sorted copy list tabulate arange format chain min log_first_n extend zip_longest colored zeros sum INFO len GroupedBatchSampler _quantize BatchSampler list check_metadata_consistency thing_classes print_instances_class_histogram filter_images_with_few_keypoints from_iterable filter_images_with_only_crowd_annotations RepeatFactorTrainingSampler build_batch_data_sampler format TrainingSampler IMS_PER_BATCH getLogger get_world_size TRAIN DataLoader MapDataset SAMPLER_TRAIN REPEAT_THRESHOLD get_detection_dataset_dicts info DatasetMapper DatasetFromList len DataLoader MapDataset BatchSampler get_detection_dataset_dicts DatasetMapper DatasetFromList InferenceSampler len randint seed_all_rng pop 
apply_box convert astype Boxes Instances nonempty XYXY_ABS as_tensor clip XYXY_ABS convert transform_keypoint_annotations apply_coords reshape Keypoints from_polygon_masks Boxes Instances tensor PolygonMasks clip Instances tensor clip RotatedBoxes append nonempty get update check_metadata_consistency keypoint_flip_map dict keypoint_names minimum asarray convert astype maximum randint int32 XYXY_ABS str format getLogger error enumerate str RandomFlip MIN_SIZE_TRAIN_SAMPLING info getLogger MIN_SIZE_TEST MIN_SIZE_TRAIN ResizeShortestEdge MAX_SIZE_TRAIN MAX_SIZE_TEST append get join list items register_coco_instances _get_builtin_metadata register_coco_panoptic_separated register_lvis_instances list join items get_lvis_instances_meta join list format items set _get_builtin_metadata register join register_pascal_voc update _get_coco_instances_meta join format partial getLogger glob map info append Pool len append join glob list asarray isinstance Polygon endswith bounds geoms id buffer difference is_empty nonzero unique append XYXY_ABS union chain warn warning Timer getCatIds sorted list append get getRelaIds format XYWH_ABS loadRela zip loadImgs info seconds keys enumerate join get_local_path loadCats len list sorted format zip warn set file2id info append len register set Timer load_imgs get_local_path format seconds sorted list zip join get set get_lvis_instances_meta startswith info LVIS keys append len join parse findall text append find register set register set register set update append copy register set register set warn warning Timer getAssoIds getCatIds sorted list append loadAsso get format XYWH_ABS zip loadImgs info seconds keys enumerate join get_local_path loadCats len width arctan2 cos h pi w new_w sin new_h check_dtype get_transform apply_image append getuid add_argument hash ArgumentParser str read format collect_env_info hasattr join config_file CUDNN_BENCHMARK setup_logger get_world_size get_rank mkdirs abspath info OUTPUT_DIR seed_all_rng socket bind close AF_INET SOCK_STREAM _find_free_port spawn main_func list init_process_group new_group synchronize set_device main_func range pred_masks_rle XYWH_ABS has pred_keypoints convert tolist pred_associations pred_light append XYXY_ABS numpy range len arange zeros_like max pairwise_iou Boxes append sum loadAnns range cat getAnnIds mean float proposal_boxes enumerate reshape sort min zeros as_tensor len pop deepcopy evaluate COCOeval summarize accumulate loadRes evaluate_rela loadRes_rela array int time format str evaluate getLogger min timedelta reset info len eval train training get_ann_ids load_anns pop deepcopy format getLogger LVISResults warn get_results create_small_table print_results info LVISEval run append tabulate info int parse findall text append find arange concatenate size maximum sum max range parse_rec cumsum argmax max sum range eps format astype float minimum reshape maximum voc_ap argsort zeros bool array len pop deepcopy SOBAeval evaluate summarize loadRes_asso accumulate loadRes array evaluate_asso join list format items getLogger info str getLogger error exit EXPECTED_RESULTS pformat info abs items list isinstance isinstance arange grid_sample size expand stack device to split int arange chunk _do_paste_mask device ceil tensor to zeros len fromarray max uint8 min from_numpy resize zeros to numpy array float new_zeros zeros_like nms view size tolist new_zeros nms_rotated min clone to max _output_size tuple meshgrid reshape arange NAME clamp sqrt log2 floor epsilon cat cat pred_boxes has Instances 
paste_masks_in_image proposal_boxes image_size clip expand min max format items list pred_classes box_combine append numpy enumerate split pred_boxes pred_associations Instances list tolist shape append scores XYWH_ABS float toBbox zeros enumerate int items reshape convert pred_classes XYXY_ABS numpy len int squeeze min numel NAME ShapeSpec enumerate IN_FEATURES OUT_CHANNELS FPN build_resnet_backbone IN_FEATURES OUT_CHANNELS FPN build_resnet_backbone append block_class range convert_frozen_batchnorm STEM_OUT_CHANNELS make_stage WIDTH_PER_GROUP max DEFORM_NUM_GROUPS FREEZE_AT RES2_OUT_CHANNELS append freeze range OUT_FEATURES DEPTH RES5_DILATION DEFORM_ON_PER_STAGE BasicStem enumerate NORM STRIDE_IN_1X1 parameters DEFORM_MODULATED NUM_GROUPS META_ARCHITECTURE zeros_like tolist argsort item append to shape reshape permute view view NAME NAME len ones Instances device image_size log cat len HEAD_NAME arange count zip sort min Boxes Instances nonempty enumerate image_sizes device append tensor batched_nms full clip cat len smooth_l1_loss to float32 binary_cross_entropy_with_logits arange count zip sort min RotatedBoxes Instances nonempty batched_nms_rotated enumerate image_sizes device append tensor full clip cat len NAME arange smooth_l1_loss size squeeze numel atan2 device cross_entropy view reshape Boxes Instances nonzero batched_nms clip NAME put_scalar cross_entropy get_event_storage view gt_keypoints squeeze numel shape cat append tensor to to_heatmap len detach heatmaps_to_keypoints zip cat split put_scalar arange get_event_storage size numel binary_cross_entropy_with_logits item append to max cat arange sigmoid zip cat split NAME NAME append squeeze gt_classes put_scalar get_event_storage squeeze numel mean unsqueeze any append tensor WEIGHT_DECAY_NORM endswith WEIGHT_DECAY_BIAS SGD named_parameters BASE_LR BIAS_LR_FACTOR WEIGHT_DECAY LR_SCHEDULER_NAME clamp min area where zeros max max min area clamp long argmax arange view exp_ clamp squeeze new_zeros ceil float sum max range frPyObjects merge from_numpy deepcopy max polygons_to_bitmask str list defaultdict items get_env_module is_available device_count append tabulate range randint len barrier get_world_size format getLogger from_buffer dumps get_rank warning get_backend device to len get_world_size all_gather tensor max zeros cat _serialize_to_tensor _get_global_gloo_group loads zip append max _pad_to_largest_tensor max zip _get_global_gloo_group loads get_rank append _serialize_to_tensor _pad_to_largest_tensor randint all_gather get_world_size seed int set_rng_state format from_bytes get_state getLogger strftime getpid info urandom spec_from_file_location exec_module module_from_spec get int setUseOpenCL get setup_custom_environment _configure_libraries endswith setup_environment _import_file import_module setFormatter join format _cached_log_stream getLogger addHandler StreamHandler Formatter mkdirs _ColorfulFormatter dirname colored DEBUG setLevel f_back _getframe f_code _find_caller log isinstance _find_caller log tuple tabulate zip COCOeval freeze defrost is_frozen CN get_caffe2_inputs C2MetaArch export_caffe2_detection_model property onnx_graph_to_caffe2_net get_available_passes optimize apply get items sorted list infer_device_type _assign_op_device_option get_ssa deepcopy format remove_reshape_for_fc info construct_init_net_from_params get_params_from_init_net fuse_alias_placeholder fuse_copy_between_cpu_and_gpu _assign_device_option remove_dead_end_ops any encode_additional_info _op_stats colored _export_via_onnx 
group_norm_replace_aten_with_caffe2 tabulate __name__ format save_graph info get arange keypoint_rcnn_inference pred_classes Boxes int64 zeros to apply get from_tensors image_sizes zip append Tensor named_children isinstance ccc RPN patch ROIPooler device int upsample_filt zeros conv_transpose2d warning isinstance is_in_onnx_export FetchBlob arg get_pb_arg get_pb_arg get_pb_arg get_pb_arg get_pb_arg get_pb_arg format extend MakeArgument warning getattr setattr get_pb_arg update data type NetDef list format items isinstance extend warning append type len range enumerate defaultdict append range enumerate len get_ssa get_producer_map deepcopy get_ssa _update_i op reversed zip union deepcopy list _replace_list map output input append partial GetPydotGraph format print write_svg op GetPydotGraphMinimal dirname _modify_blob_names write_png write_pdf makedirs remove format check_set_pb_arg get_pb_arg_vals op get_pb_arg_vali info get_pb_arg decode rename_op_input rename_op_output op extend get_pb_arg_vali append bool enumerate external_output external_input output op zip input range len deepcopy get_ssa get_producer_map rename_op_output _rename_versioned_blob_in_proto get_ssa _rename_versioned_blob_in_proto get_ssa sum format get_all_paths get_producer_map min from_ssa warning get_consumer_map get_ssa op _get_dependency_chain append enumerate join get_ssa format all deepcopy remove get_sub_graph_external_input_output rename_op_output extend info append identify_reshape_sub_graph _fuse_once get_ssa list all extend reversed add set get_consumer_map enumerate view reshape RotatedBoxes Instances batched_nms_rotated nonzero clip replace join resource_filename merge_from_file load build_model get_cfg WEIGHTS get_checkpoint_url get_config_file asarray zeros_like save rgb2id open time format starmap partial print makedirs iter_annotations Pool enumerate join relpath print symlink makedirs merge_from_file confidence_threshold config_file get_cfg merge_from_list opts freeze add_argument ArgumentParser getattr replace add_transform add_config_value connect append get zip run BytesIO rgba_to_rgb open size new convert paste rmtree BytesIO seek save uuid4 str read join basename secure_filename image_to_byte save filename open byte_to_image run_on_image read_image makedirs sleep send_file put str setup_logger setup_cfg set_start_method VisualizationDemo info parse_args merge_from_file config_file get_cfg merge_from_list default_setup opts freeze verify_results update list test_with_TTA setup build_model print resume_or_load test WEIGHTS ENABLED Trainer register_hooks eval_only keys is_main_process zeros array range minimum maximum zeros range compute_iou T astype float32 dot sum astype delete float32 compute_iou append astype float32 astype float32 log dtype min pad resize randint max pad astype resize zeros bool range astype resize zeros bool range zeros bool astype resize arange concatenate reshape flatten sqrt meshgrid array append generate_anchors range len ones trim_zeros compute_overlaps_masks range len arange concatenate cumsum compute_matches astype float32 maximum sum range len compute_ap format print mean append compute_overlaps set argmax max len print array array add_argument ArgumentParser show join format print waitKey imshow save asarray XYWH_ABS convert Boxes Instances XYXY_ABS CN shape sum arange grid_sample from_numpy shape meshgrid to append randn clamp rand Boxes benchmark manual_seed is_available cat merge_from_file get_cfg get_config_file BitMasks rand Boxes Instances to BitMasks rand 
Boxes Instances to is_available stack benchmark append | # Instance Shadow Detection This repo is implemented on [Detectron2](https://github.com/facebookresearch/detectron2). The original repo is [original repo](https://github.com/stevewongv/InstanceShadowDetection). I made flask api server and web frontend to run the model and get result via http protocol. ## How to run **Run right now!** [](https://ainize.web.app/redirect?git_repo=https://github.com/gmlee329/InstanceShadowDetection) [DEMO](https://master-instance-shadow-detection-gmlee329.endpoint.ainize.ai) **In local** It must need GPU so, [Nvidia-docker](https://github.com/NVIDIA/nvidia-docker) is needed. | 2,192 |
gnhdnb/adjustable-real-time-style-transfer | ['style transfer'] | ['Adjustable Real-time Style Transfer'] | neural_style/neural_style.py neural_style/vgg.py neural_style/transformer_net.py neural_style/utils.py stylize check_paths load_weights stylize_onnx_caffe2 main train ResidualBlock ConditionerNet ContitionalInstanceNorm2d ResidualDense TransformerNet ConvLayer UpsampleConvLayer gram_matrix load_image save_image normalize_batch Vgg19 checkpoint_model_dir save_model_dir makedirs load list model search load_state_dict keys style_transform zero_grad relu2_2 conditioner ImageFolder DataLoader unsqueeze save device dataset ctime seed str style_weight Adam MSELoss epochs transformer style_image normalize_batch load_image to range detach state_dict format replace save_model_dir Compose load_weights lr manual_seed vgg zip checkpoint_model_dir enumerate join backward print gram_matrix parameters cpu step mse_loss content_weight len endswith content_image Compose content_transform stylize_onnx_caffe2 device load_image to output_image save_image load prepare model stylize print add_argument exit add_parser check_paths ArgumentParser parse_args train add_subparsers resize ANTIALIAS open fromarray numpy astype save bmm size transpose view view div_ | # Adjustable Real-time Style Transfer Pytorch implementation of the method described in [Adjustable Real-time Style Transfer](https://arxiv.org/pdf/1811.08560.pdf) This implementation is based heavily on [pytorch style transfer example](https://github.com/pytorch/examples/tree/master/fast_neural_style) <p align="center"> <img src="images/content-images/amber.jpg" height="320px"> <img src="images/output-images/amber-mosaic.gif" height="320px"> </p> | 2,193 |
gnina/libmolgrid | ['data augmentation'] | ['libmolgrid: GPU Accelerated Molecular Gridding for Deep Learning Applications'] | python/torch_bindings.py python/setup.py sphinx/cpp/conf.py test/test_example_provider.py test/test_gridio.py sphinx/python/conf.py test/test_transform.py test/test_grid.py test/test_numpy.py test/test_example_dataset.py test/test_torch_cnn.py test/test_gridinterp.py test/test_gridmaker.py python/__init__.py test/test_typing.py test/test_example.py test/conftest.py test/test_coordinateset.py test/test_torch.py git_pep440_version Coords2GridFunction make_grid_tensor MolDataset BatchedCoords2GridFunction Coords2Grid Grid2CoordsGradientFunction tensor_as_grid make_grid_ndarray tonumpy pytest_configure test_coordset_from_mol_vec test_coordset_merge test_coordset_from_array test_coordset_from_mol test_examplevec test_example_merge test_example_dataset test_type_sizing test_grouped_example_provider test_cached_with_typer_example_provider test_pytorch_dataset test_duplicated_examples test_copied_examples test_custom_typer_example_provider test_gnina_example_provider test_cached_example_provider test_example_provider_epoch_iteration test_make_vector_types_ex_provider test_mol_example_provider test_vector_sum_types test_example_provider_iterator_interface test_grid scipy_interp test_downsampling test_rotations test_translation test_transforms test_upsampling test_mol_transforms test_dx test_radius_multiples test_a_grid test_type_radii test_backwards test_vector_types_duplicate test_vector_types test_backward_vec test_backward_gradients test_vector_types_mol test_devices test_clear test_numpy_conv test_tonumpy test_numpy test_mgrid_copyto_tensor_cuda test_mgrid_copyfrom_tensor_cuda test_mgrid_copyto_tensor test_mgrid_copyfrom_tensor test_torch_gnina_example_provider test_batched_function test_coords2grid test_function test_train_torch_cnn eqQ tup neqQ test_apply_transform test_numpy_apply_transform test_random_transform test_gninatyping test_nulltyping test_subset_elementtyping test_filemap_gninatyping test_defaultgninatyping test_gninavector_typing test_filemap_elementtyping test_elementtyping test_callbackvector_typing test_callbacktyping git_command DoubleTensor globals isinstance FloatTensor max_type grid_dimensions zeros forward get_gpu_enabled shape empty type getattr copyTo zeros forward max_type grid_dimensions addinivalue_line CoordinateSet collect readstring make3D range addh rss CoordinateSet readstring get_type_radii make_vector_types make3D addh CoordinateSet Transform Quaternion clone float32 tonumpy forward array CoordinateSet readstring float32 make_vector_types make3D zeros copyTo addh CoordinateSet ElementIndexTyper readstring concatenate Example make_vector_types make3D append merge_coordinates addh CoordinateSet ElementIndexTyper readstring ExampleVec Example make_vector_types make3D append addh ExampleDataset populate ExampleProvider tonumpy populate ExampleProvider next_batch populate ElementIndexTyper ExampleProvider MGrid2f extract_label MGrid1f num_labels extract_labels cpu next_batch range array gpu populate ExampleProvider MGrid2f extract_label MGrid1f num_labels extract_labels cpu next_batch range gpu populate ExampleProvider next_batch populate ElementIndexTyper ExampleProvider testprovider populate ExampleProvider num_types forward grid_dimensions tonumpy GridMaker NullIndexTyper defaultGninaLigandTyper MGrid5f next_batch merge_coordinates populate ExampleProvider next_batch populate ExampleProvider num_types MGrid2f assert_allclose float32 
shape tonumpy numpy NullIndexTyper defaultGninaLigandTyper zeros next_batch empty populate sum_types ExampleProvider center Transform forward assert_allclose tonumpy next_batch sum range populate ExampleProvider MGrid2f assert_allclose num_labels extract_labels cpu next_batch populate enumerate num_types DataLoader forward cuda populate GridMaker iter next range MolDataset tonumpy grid_dimensions MGrid5f assert_allclose ExampleProvider MGrid4f cpu merge_coordinates gpu ExampleProvider next_batch sum range populate ExampleProvider src add set get_small_epoch_num get_large_epoch_num populate Grid2f clone MGrid2f cpu Transform reshape MGrid4f gpu copyFrom cpu GridInterpolater forward range int RegularGridInterpolator MGrid2f backward invalues reshape copyFrom tonumpy linspace meshgrid zeros range Transform reshape MGrid4f gpu copyFrom cpu GridInterpolater forward range Transform reshape MGrid4f copyFrom tonumpy set_translation fill_zero cpu GridInterpolater forward gpu assert_allclose scipy_interp Transform reshape MGrid4f range copyFrom tonumpy fill_zero cpu GridInterpolater forward gpu assert_allclose scipy_interp Transform reshape MGrid4f set_rotation_center range copyFrom fill_zero cpu GridInterpolater forward gpu assert_allclose ExampleProvider center Transform forward assert_allclose MGrid4f clone get_resolution tonumpy GridMaker max_type get_dimension grid_dimensions cpu GridInterpolater next gpu populate num_types tuple forward populate read_dx_grids GridMaker get_type_names next write_dx center tonumpy grid_dimensions ExampleProvider read_dx remove MGrid4f get_resolution write_dx_grids assert_array_almost_equal cpu ExampleProvider center num_types Transform forward tuple MGrid4f GridMaker max_type make_ndarray grid_dimensions cpu zeros next gpu populate make_tensor ExampleProvider set_gpu_device num_types forward GridMaker grid_dimensions zeros next populate Grid2f list CoordinateSet arange exp abs MGrid4f float32 range Grid1f tonumpy GridMaker grid_dimensions cpu forward array gpu assert_allclose CoordinateSet Transform MGrid2f backward print MGrid4f float32 flatten tonumpy GridMaker grid_dimensions cpu forward array gpu assert_allclose Grid2f CoordinateSet MGrid2f backward MGrid4f float32 Grid1f tonumpy GridMaker fill_zero grid_dimensions cpu forward array gpu assert_allclose num_types forward populate MGrid2f ones GridMaker next set_radii_type_indexed center size tonumpy grid_dimensions assert_allclose ExampleProvider backward MGrid4f float32 copyFrom cpu merge_coordinates gpu ExampleProvider num_types assert_allclose numpy GridMaker grid_dimensions zeros next_batch forward populate GninaVectorTyper CoordinateSet MGrid2f backward MGrid4f float32 tonumpy GridMaker grid_dimensions cpu array gpu assert_allclose Grid2f CoordinateSet MGrid2f backward MGrid4f float32 Grid1f make_vector_types tonumpy GridMaker fill_zero grid_dimensions cpu forward array gpu assert_allclose exp arange ones apply GridMaker zeros float sum range Grid2f MGrid2f reshape MGrid3d copyFrom Grid3d copyTo MGrid2f reshape copyFrom MGrid3d copyTo tonumpy MGrid1d range Grid1d rand copyFrom MGrid3d fill_zero MGrid2f FloatTensor MGrid3d DoubleTensor copyTo range MGrid2f FloatTensor MGrid3d DoubleTensor copyTo range MGrid2f FloatTensor MGrid3d copyFrom DoubleTensor range MGrid2f FloatTensor MGrid3d copyFrom DoubleTensor range ExampleProvider extract_label num_labels extract_labels dirname zeros next_batch range populate backward apply GridMaker grid_dimensions tensor zeros backward apply GridMaker grid_dimensions tensor 
zeros num_types zeros_like randn get_type_radii MGrid1f mse_loss unsqueeze tensor Parameter list MGrid2f GridMaker tile grid_dimensions Coords2Grid backward print float32 copyFrom c2grid numpy array len num_types model zero_grad SGD set_random_seed forward populate seed extract_label apply GridMaker dirname append to range cross_entropy mean grid_dimensions manual_seed float ExampleProvider backward parameters zeros next_batch step y x z eqQ Transform neqQ set_random_seed float3 get_quaternion Transform MGrid2f backward Quaternion sqrt float3 forward range Transform backward Quaternion float32 sqrt float3 zeros forward array range list readstring GninaIndexTyper get_type_names addh list readstring defaultGninaReceptorTyper get_type_names defaultGninaLigandTyper addh list readstring ElementIndexTyper get_type_names addh NullIndexTyper addh readstring list readstring SubsettedElementTyper get_type_names addh FileMappedGninaTyper list readstring dirname get_type_names addh list readstring FileMappedElementTyper dirname get_type_names addh list readstring PythonCallbackIndexTyper get_type_names addh list readstring get_type_names addh GninaVectorTyper list readstring PythonCallbackVectorTyper get_type_names addh | libmolgrid ==========  [](https://codecov.io/gh/gnina/libmolgrid) libmolgrid is under active development, but should be suitable for use by early adopters. If you use libmolgrid in your research, please cite: **libmolgrid: Graphics Processing Unit Accelerated Molecular Gridding for Deep Learning Applications.** J Sunseri, DR Koes. *Journal of Chemical Information and Modeling*, 2020 [arxiv](https://arxiv.org/pdf/1912.04822.pdf) ``` @article{sunseri2020libmolgrid, title={libmolgrid: Graphics Processing Unit Accelerated Molecular Gridding for Deep Learning Applications}, | 2,194 |
godisloveforme/instrumentClassifer | ['data augmentation'] | ['Kapre: On-GPU Audio Preprocessing Layers for a Quick Implementation of Deep Neural Network Models with Keras'] | clean.py train.py models.py predict.py eda.py downsample_mono split_wavs test_threshold envelope check_dir save_sample envelop plot_mfccs plot_fbank plot_logmel plot_signals calc_fft get_log_mel_spectrogram_vector plot_fft Conv2D Conv1D LSTM make_prediction train DataGenerator append abs max apply int16 read T resample astype float32 to_mono join str format write exists mkdir join delta_time format int arange dst_root glob check_dir downsample_mono sr tqdm src_root zeros envelope listdir enumerate save_sample show str format use threshold plot print glob downsample_mono grid sr fn title legend src_root envelope subplots set_title plot suptitle set_visible range subplots set_title plot suptitle set_visible range subplots set_title suptitle imshow set_visible range subplots set_title suptitle imshow set_visible range subplots set_title suptitle imshow set_visible range rfftfreq rfft abs len mean abs append apply T log10 melspectrogram zeros range epsilon len Model Input compile Model Input compile Model Input compile concatenate sr LabelEncoder flatten save argmax sorted load_model model_fn dt append envelope src_dir fit_transform range predict format glob mean pred_fn listdir enumerate int join print reshape downsample_mono tqdm zeros array delta_time batch_size warn LabelEncoder src_root sorted model_type train_test_split format DataGenerator glob set listdir join sample_rate fit CSVLogger transform ModelCheckpoint len | # Audio-Classification (Kapre Version) Pipeline for prototyping audio classification algorithms with TF 2  <!-- TOC --> - [YouTube](#youtube) - [Environment](#environment) - [Jupyter Notebooks](#jupyter-notebooks) - [Audio Preprocessing](#audio-preprocessing) - [Training](#training) - [Plot History](#plot-history) | 2,195 |
golsun/StyleFusion | ['response generation', 'style transfer'] | ['Structuring Latent Spaces for Stylized Response Generation'] | src/decode.py src/dataset.py src/vis.py src/tf_lib.py src/main.py src/classifier.py src/model.py data/arXiv/arxiv.py src/shared.py norm_sentence arxiv_clean arxiv_paragraph_all arxiv_utts_all arxiv_paragraph arxiv_filter arxiv_del_bib arxiv_utts arxiv_pandoc ClassifierNgramEnsemble score_file load_classifier ClassifierNgram is_word Classifier1gramCount txt2ww ClassifierNeural clf_eval clf_interact Dataset load_vocab infer_rank Decoder infer parse_infer_args infer_comb rank_nbest repetition_penalty remove_duplicate_unfished run_master get_model_fld _cross_inner _absdiff_dist_v1 Seq2Seq VanillaMTask _dec_loss convert_model_vocab _params Seq2SeqBase LossHistory ModelBase write_log StyleFusion _dec_loss_u _absdiff_dist _interp _add_noise _dist_1nn _relative_dist euc_dist reset_rand rand_latent now calc_nltk_bleu int2str calc_nltk_bleu_smoothed str2bool strmap makedirs dist_mat clusters plot_multiple plot_history interp angel_hist cos_sim join replace chr TweetTokenizer strip lower sub find append tokenize split append strip chr open communicate print arxiv_del_bib listdir Popen append strip lines2paragraph open append replace split endswith listdir print arxiv_paragraph norm_sentence chr replace lower open sub append split sorted endswith print arxiv_utts append listdir join lower open sub split append range len isalpha endswith print eval input load_classifier print lower load_classifier open append split print load_classifier mean append predict open dict strip enumerate get atleast_2d PriorityQueue replace evaluate strip min repetition_penalty put classifiers split append ravel txt2seq range predict len split append linspace startswith rand_latent sorted atleast_2d infer reset_rand seq2txt zip txt2seq enumerate predict remove_duplicate_unfished list tuple dict append range len int startswith split float open get str atleast_2d evaluate print clf_names reset_rand now infer remove_duplicate_unfished rank_nbest append txt2seq predict len fld path_test strip exists open str restore infer_rank sorted get_vali_data load_classifier append input debug Master vali lower load_weights eval startswith print_results Seq2SeqLM keys join int print get_model_fld parse_infer_args summary train Dataset split str join replace debug stddev lower noisy_vocab lr startswith data_name append reld fld_suffix print flush reshape maximum reduce_sum sqrt reduce_mean eye tile expand_dims squared_difference minimum sqrt_mse print _dist_1nn split _cross_inner _cross_inner _cross_inner categorical_crossentropy cast int32 split random_normal tile reshape random_uniform get str load savez print load_vocab files keys dict shape item zeros max exists values seed isinstance normal sum sqrt power euc_dist zeros range show evaluate plot print ones text xlabel add_subplot ylabel subplots_adjust shape title savefig linspace figure feed_data append predict subplots set_yticklabels dist_mat linspace tick_top show list sorted colorbar imshow ylim title savefig append sum fit_transform range predict replace plot concatenate set_xticklabels close shuffle sqrt nan keys int print set_yticks dict hist set_xticks ravel len show load_1toN_data print reshape range title hist savefig append ravel cos_sim predict len subplots replace plot isinstance set_title MA split range read_log enumerate len show int subplots print grid plot_history ceil range len | # StyleFusion code/data for EMNLP'19 paper [Structuring Latent 
Spaces for Stylized Response Generation](https://arxiv.org/abs/1909.05361). Designed to build a **stylized** dialogue response generator, StyleFusion jointly learns from a conversational dataset and other formats of text (e.g., non-parallel, non-conversational stylized text dataset). In our EMNLP 2019 paper, we demonstrated its use to generate response in style of **Sherlock Holmes** and **arXiv**. StyleFusion is a generalized version of our previous work [SpaceFusion](https://github.com/golsun/SpaceFusion). More documents: * our EMNLP'19 [paper](https://arxiv.org/abs/1909.05361) and [poster](https://github.com/golsun/StyleFusion/blob/master/EMNLP%20poster.pdf). * A nice [introduction](https://mp.weixin.qq.com/s/rtAra15Qqnz9bLadSUSAlg) of our work (not official, by Shannon.AI, in Chinese) ## Dataset In our paper, we trained the model using the following three datasets. * **Reddit**: the conversational dataset (`base_conv`), can be generated using this [script](https://github.com/golsun/SpaceFusion/tree/master/data#multi-ref-reddit). * **Sherlock Holmes**, one of style dataset (`bias_nonc`), avaialble [here](https://github.com/golsun/StyleFusion/tree/master/data/Holmes) | 2,196 |
gombru/TextFCN | ['scene text recognition'] | ['Improving Text Proposals for Scene Images with Fully Convolutional Networks'] | surgery.py draw_net_structure.py compute_heatmaps.py layers.py voc-fcn8s-atonce/solve.py voc-fcn8s-atonce/net.py infer.py score.py COCODataLayer VOCSegDataLayer SBDDSegDataLayer SYNTHTEXTDataLayer ICDARDataLayer seg_tests compute_hist do_seg_tests fast_hist transplant upsample_filt expand_score interp fcn conv_relu make_net max_pool fromarray join uint8 channels astype mkdir save zeros forward print now do_seg_tests iter share_with net format print channels now compute_hist nanmean sum diag print shape params range flat len data num print shape upsample_filt Convolution data Python pool2 SoftmaxWithLoss score relu2_2 relu3_2 relu2_1 relu1_2 fuse_pool4 score_pool3c conv_relu relu5_1 relu7 pool1 fuse_pool3 drop7 scale_pool3 relu5_3 relu4_2 score_pool3 pool4 relu6 relu5_2 drop6 Convolution Deconvolution scale_pool4 score_pool4 relu4_3 score_pool4c Eltwise score_fr label relu1_1 crop relu4_1 NetSpec Scale max_pool dict relu3_1 relu3_3 pool5 upscore2 pool3 upscore8 upscore_pool4 Dropout | gombru/TextFCN | 2,197 |
gonenhila/gender_bias_lipstick | ['word embeddings'] | ['Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them'] | source/save_embeds.py main load_embeddings format print load_embeddings infile save parse_args list print loadtxt len zip enumerate split | # Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
This project includes the experiments described in the [paper](https://arxiv.org/pdf/1903.03862.pdf):
**"Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them"**, Hila Gonen and Yoav Goldberg, NAACL 2019.
Full reimplementation of the experiments is available in "remaining_bias_2016.ipynb" for Bolukbasi's embeddings, and in "remaining_bias_2018.ipynb" for Zhao's embeddings.
## Prerequisites
| 2,198 |
gonenhila/grammatical_gender | ['word embeddings'] | ['How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?'] | source/create_pairs_word2vecf.py source/save_embeds.py normalize load_embeddings_from_w2vf load_and_normalize print load norm apply_along_axis print normalize load_embeddings_from_w2vf | # How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?
This project includes the experiments described in the [paper](https://www.aclweb.org/anthology/K19-1043/):
**"How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?"**
Hila Gonen, Yova Kementchedjhieva and Yoav Goldberg, CoNLL 2019 (best paper).
## Prerequisites
* [word2vecf](https://github.com/BIU-NLP/word2vecf)
| 2,199 |