repo (stringlengths 8–116) | tasks (stringlengths 8–117) | titles (stringlengths 17–302) | dependencies (stringlengths 5–372k) | readme (stringlengths 5–4.26k) | __index_level_0__ (int64 0–4.36k)
---|---|---|---|---|---
Rikorose/clc-dns-challenge-2020 | ['speech enhancement'] | ['CLC: Complex Linear Coding for the DNS 2020 Challenge'] | clcnet-dns2020/enhance_jit.py clcnet-dns2020/model_export_jit.py enhance_jit worker_fn resample worker_init load_audio ExponentialDecay ExponentialUpdate export_jit_frame complex_mul CLCNetStep filterwarnings print read resample zeros_like print squeeze init_buffers numel mean eval pad item ceil load str int set_num_threads item join basename enhance_jit output_folder model print write sr verbose numpy load_audio empty_like zeros_like model init_buffers trace_module eval save item uniform_ fft_step ifft_step | # CLCNet results for the DNS Challenge 2020 Implementation for the Paper [CLC: Complex Linear Coding for the DNS 2020 Challenge](https://arxiv.org/abs/2006.13077). To run this model on some noisy audio files, use the python script `clcnet-dns2020/enhance_jit.py` and a model file in `models`: ```py python enhance_jit.py models/clc.pt <input_noisy_dir> <output_enhanced_dir> ``` Citation: ```bibtex @misc{schrter2020clc, | 900 |
RingBDStack/ELCO | ['graph generation', 'graph attention'] | ['Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College'] | ELCO-GCN(run first)/gcn/models.py ELCO-GCN(run first)/gcn/metrics.py ELCO-GCN(run first)/setup.py ELCO-GAT/gat/train.py ELCO-GCN(run first)/gcn/train.py ELCO-GCN(run first)/gcn/layers.py ELCO-GAT/gat/utils.py ELCO-GCN(run first)/gcn/inits.py ELCO-GAT/gat/layers.py ELCO-GAT/gat/models.py ELCO-GCN(run first)/gcn/utils.py ELCO-GCN(run first)/gcn/__init__.py SpecialSpmmFunction SpGraphAttentionLayer GraphAttentionLayer SpecialSpmm SpGAT GAT train compute_test normalize_adj normalize_features accuracy load_data encode_onehot ones zeros uniform glorot get_layer_uid sparse_dropout dot GraphConvolution Dense Layer masked_softmax_cross_entropy masked_accuracy Model GCN MLP evaluate preprocess_features normalize_adj sparse_to_tuple sample_mask construct_feed_dict chebyshev_polynomials parse_index_file load_data save_file preprocess_adj normalize_features load_file time format model backward print nll_loss zero_grad accuracy eval item step format model print nll_loss accuracy eval item get list map set array genfromtxt list normalize_adj todense FloatTensor csr_matrix multiply shape normalize_features range format LongTensor close coo_matrix print reshape eye encode_onehot array flatten sum array diags diags flatten dot sum array sum type_as double random_uniform sqrt random_uniform sparse_retain floor cast sparse_tensor_dense_matmul matmul softmax_cross_entropy_with_logits cast float32 argmax cast equal run time construct_feed_dict append int strip open zeros close close from_dict_of_lists tuple parse_index_file vstack accuracy_score max EgoNetSplitter str sorted tolist map GradientBoostingClassifier save_file append predict lil_matrix Graph set adjacency_matrix predict_proba zip get_memberships set_params connected_components tolil sort min fit sample_mask index zeros load_file len to_tuple range isinstance len diags flatten dot sum array coo_matrix normalize_adj eye dict update list normalize_adj format chebyshev_recurrence print eye append range eigsh | # ELCO a Heuristic Semi-supervised Learning Framework | 901 |
RingoS/sentiment-review-summary | ['sentiment analysis'] | ['Making the Best Use of Review Summary for Sentiment Analysis'] | functions.py processing_data.py param/config_toy.py models.py param/config_sports.py param/config_movie.py main.py test adjust_lr _init_fn BiLSTM_centric_model LayerNorm mean_pooling BiLSTM_centric_layer convert_text_to_index construct_word_embedding parse tokenize_data corpus2ids batchify load_amazon_review text2index load_dataset extract_raw_sum MixedDataset load_txt_folder model len dict eval to argmax enumerate seed param_groups cat enumerate list pad_sequence map tensor max print join getcwd glob from_pretrained TOY_FILE GLOVE_PATH FILE_PATH DataLoader load_txt_folder open FloatTensor corpus2ids append dump format SPORTS_FILE MOVIE_FILE MixedDataset enumerate load time tokenize_data print dict isfile zeros extract_raw_sum construct_word_embedding len open append parse dict int load_amazon_review append dict enumerate append len clear deepcopy print append enumerate deepcopy dict append tokenize enumerate insert convert_tokens_to_ids append tokenize enumerate sent_tokenize dict text2index append enumerate | # sentiment-review-summary Code for our COLING-2020 paper ["Making the Best Use of Review Summary for Sentiment Analysis"](https://www.aclweb.org/anthology/2020.coling-main.15.pdf) ### Before running the code - Our experiments were conducted under `pytorch==1.0.1` and `cudatoolkit==9.0`, with `python==3.6`. - Some other required packages: tensorboardX, pickle, nltk, numpy. - SNAP Amazon Review dataset can be obtained from [this url](http://snap.stanford.edu/data/web-Amazon.html). (We cannot provide the data in our repo due to copyright issue. ) - The paths of the dataset and the embedding file (we use GloVe in our experiments) in the config file need to be filled in before running the code. ### Run the code `python main.py --config config_toy` - /param/config_toy.py is just an example config file. You may create your own config file. | 902 |
RintaroKanada/efficient-gan-chainer | ['anomaly detection'] | ['Efficient GAN-Based Anomaly Detection'] | efficient_gan.py efficient_gan_train.py efficient_gan_test.py test_efficient_gan train_efficient_gan_labeled data call_feature to_cpu Variable reshape abs print vstack enc gen sum to_gpu range permutation zero_grads save fromarray str setup ones Adam gen range sigmoid_cross_entropy update get dis save_hdf5 astype uint8 backward Variable reshape print float32 enc zeros to_gpu | # efficient-gan-chainer https://arxiv.org/abs/1802.06222 efficient-gan implementation by chainer ## Description efficient-gan implementation by chainer. you can test efficient-gan with mnist dataset. ## Requirement - python3+ - chainer1.24.0 ## Usage You can train efficient-gan by selected number as correct value. | 903 |
Rintarooo/VRP_DRL_MHA | ['combinatorial optimization'] | ['Attention, Learn to Solve Routing Problems!'] | TensorFlow2/baseline.py makegif.py PyTorch/config.py TensorFlow2/decoder.py TensorFlow2/layers.py TensorFlow2/decoder_utils_backup.py TensorFlow2/config.py PyTorch/model.py PyTorch/baseline.py TensorFlow2/data.py TensorFlow2/encoder.py TensorFlow2/model.py PyTorch/layers.py PyTorch/plot.py PyTorch/data.py TensorFlow2/plot.py PyTorch/encoder.py PyTorch/dist_matrix.py PyTorch/decoder_utils.py PyTorch/decoder.py PyTorch/plot_2opt.py TensorFlow2/train.py PyTorch/train.py TensorFlow2/decoder_utils.py load_model RolloutBaseline Config test_parser arg_parser train_parser load_pkl dump_pkl generate_data Generator data_from_txt DecoderCell Env CategoricalSampler TopKSampler Sampler get_dist get_dist_matrix GraphAttentionEncoder Normalization SelfAttention ResidualBlock_BN EncoderLayer DotProductAttention MultiHeadAttention AttentionModel plot_route get_clean_path opt2 get_clean_path plot_route improve_opt2 get_sum_dist train copy_model load_model rollout RolloutBaseline Config test_parser arg_parser load_pkl file_parser dump_pkl generate_data data_from_txt DecoderCell Env CategoricalSampler TopKSampler Sampler Env CategoricalSampler TopKSampler Sampler EncoderLayer SelfAttention GraphAttentionEncoder ResidualBlock_BN DotProductAttention MultiHeadAttention AttentionModel plot_route get_clean_path train is_available load load_state_dict AttentionModel parse_args add_argument ArgumentParser Config parse_args add_argument ArgumentParser parse_args add_argument ArgumentParser manual_seed Tensor isinstance get_dist float round range len append insert show sum concatenate insert get_clean_path sqrt diff Layout append numpy Scatter Figure enumerate range len list get_dist_matrix insert concatenate opt2 extend get_clean_path get_sum_dist append numpy range len eval_all n_heads clip_grad_norm_ zero_grad DataLoader embed_dim save device Generator RolloutBaseline Adam epochs islogger to range wp_epochs state_dict weight_dir n_customer warmup_beta batch enumerate task time n_encode_layers backward print batch_steps rein_loss parameters tanh_clipping AttentionModel n_rollout_samples step epoch_callback generate_data assign AttentionModel zip variables new_model batch generate_data load_weights model_loaded batch model write TensorArray tqdm batch enumerate parse_args add_argument ArgumentParser from_non_deterministic_state from_seed print trainable_variables clip_by_global_norm save_weights reset_states apply_gradients grad_func n_samples generate_data zip Mean update_state allocate_memory | # CVRP solver with Multi-Head Attention TensorFlow2 and PyTorch implementation of `ATTENTION, LEARN TO SOLVE ROUTING PROBLEMS!`(Kool et al. 2019)(https://arxiv.org/pdf/1803.08475.pdf) <img src="https://user-images.githubusercontent.com/51239551/88506411-cd450f80-d014-11ea-84eb-12e7ab983780.gif" width="650"/> <img src="https://user-images.githubusercontent.com/51239551/88507610-bfdd5480-d017-11ea-99de-e9850e6be0db.gif" width="650"/> <img src="https://user-images.githubusercontent.com/51239551/89150677-0ee83400-d59a-11ea-90ed-2852dc1ddd4b.gif" width="650"/> ## Description [Slide Share -- CVRP solver with Multi Heads Attention --](https://www.slideshare.net/RINTAROSATO4/cvrp-solver-with-multihead-attention) ## Dependencies * Python >= 3.6 * TensorFlow >= 2.0 | 904 |
Rintarooo/VRP_MHA | ['combinatorial optimization'] | ['Attention, Learn to Solve Routing Problems!'] | TensorFlow2/baseline.py makegif.py PyTorch/config.py TensorFlow2/decoder.py TensorFlow2/layers.py TensorFlow2/decoder_utils_backup.py TensorFlow2/config.py PyTorch/model.py PyTorch/baseline.py TensorFlow2/data.py TensorFlow2/encoder.py TensorFlow2/model.py PyTorch/layers.py PyTorch/plot.py PyTorch/data.py TensorFlow2/plot.py PyTorch/encoder.py PyTorch/dist_matrix.py PyTorch/decoder_utils.py PyTorch/decoder.py PyTorch/plot_2opt.py TensorFlow2/train.py PyTorch/train.py TensorFlow2/decoder_utils.py load_model RolloutBaseline Config test_parser arg_parser train_parser load_pkl dump_pkl generate_data Generator data_from_txt DecoderCell Env CategoricalSampler TopKSampler Sampler get_dist get_dist_matrix GraphAttentionEncoder Normalization SelfAttention ResidualBlock_BN EncoderLayer DotProductAttention MultiHeadAttention AttentionModel plot_route get_clean_path opt2 get_clean_path plot_route improve_opt2 get_sum_dist train copy_model load_model rollout RolloutBaseline Config test_parser arg_parser load_pkl file_parser dump_pkl generate_data data_from_txt DecoderCell Env CategoricalSampler TopKSampler Sampler Env CategoricalSampler TopKSampler Sampler EncoderLayer SelfAttention GraphAttentionEncoder ResidualBlock_BN DotProductAttention MultiHeadAttention AttentionModel plot_route get_clean_path train is_available load load_state_dict AttentionModel parse_args add_argument ArgumentParser Config parse_args add_argument ArgumentParser parse_args add_argument ArgumentParser manual_seed Tensor isinstance get_dist float round range len append insert show sum concatenate insert get_clean_path sqrt diff Layout append numpy Scatter Figure enumerate range len list get_dist_matrix insert concatenate opt2 extend get_clean_path get_sum_dist append numpy range len eval_all n_heads clip_grad_norm_ zero_grad DataLoader embed_dim save device Generator RolloutBaseline Adam epochs islogger to range wp_epochs state_dict weight_dir n_customer warmup_beta batch enumerate task time n_encode_layers backward print batch_steps rein_loss parameters tanh_clipping AttentionModel n_rollout_samples step epoch_callback generate_data assign AttentionModel zip variables new_model batch generate_data load_weights model_loaded batch model write TensorArray tqdm batch enumerate parse_args add_argument ArgumentParser from_non_deterministic_state from_seed print trainable_variables clip_by_global_norm save_weights reset_states apply_gradients grad_func n_samples generate_data zip Mean update_state allocate_memory | # CVRP solver with Multi-Head Attention TensorFlow2 and PyTorch implementation of `ATTENTION, LEARN TO SOLVE ROUTING PROBLEMS!`(Kool et al. 2019)(https://arxiv.org/pdf/1803.08475.pdf) <img src="https://user-images.githubusercontent.com/51239551/88506411-cd450f80-d014-11ea-84eb-12e7ab983780.gif" width="650"/> <img src="https://user-images.githubusercontent.com/51239551/88507610-bfdd5480-d017-11ea-99de-e9850e6be0db.gif" width="650"/> <img src="https://user-images.githubusercontent.com/51239551/89150677-0ee83400-d59a-11ea-90ed-2852dc1ddd4b.gif" width="650"/> ## Description [Slide Share -- CVRP solver with Multi Heads Attention --](https://www.slideshare.net/RINTAROSATO4/cvrp-solver-with-multihead-attention) ## Dependencies * Python >= 3.6 * TensorFlow >= 2.0 | 905 |
Rithmax/Sub-band-Envelope-Features-Using-Frequency-Domain-Linear-Prediction | ['open set learning'] | ['AP18-OLR Challenge: Three Tasks and Their Baselines'] | Backend_BLSTM_Training/Cavg_computations.py Backend_BLSTM_Training/Tools/Data_Processing_Tools_h5d.py Backend_BLSTM_Training/FDLP_BLSTM_1s.py get_table genererate_test_score Loaddata lid_format_utt Compute_Cavg CountCavg network_initialization read_test_data_long_pred_gen read_test_data_long read_test_data read_data_train read_file_paths shuffle_aligned_list_sar GenSequence_Test_long GenSequence_AVG h5read GenSequence_Test genfromtxt split close write open int exp readlines close len split append range open sum range genfromtxt str print genererate_test_score Loaddata lid_format_utt round log CountCavg makedirs multi_gpu_model concatenate print Model summary Input permutation len shuffle_aligned_list_sar T rstrip len tile append zeros ravel array range split zeros rstrip range len reshape rstrip reshape rstrip | # Sub-band-Envelope-Features-Using-Frequency-Domain-Linear-Prediction-for-Short-Duration-Language-Identification Paper: https://www.isca-speech.org/archive/Interspeech_2018/abstracts/1805.html Cite as: Fernando, S., Sethu, V., Ambikairajah, E. (2018) Sub-band Envelope Features Using Frequency Domain Linear Prediction for Short Duration Language Identification. Proc. Interspeech 2018, 1818-1822, DOI: 10.21437/Interspeech.2018-1805. Implemented with AP17-OLR/ AP18-OLR Database: * AP17-OLR Challenge: Data, Plan, and Baseline (https://arxiv.org/pdf/1706.09742.pdf) * AP18-OLR Challenge (https://arxiv.org/pdf/1806.00616.pdf) Usage: 1) Feature Extraction: fdlp_extraction_AP18_OLR.m 2) Backend BLSTM Training: FDLP_BLSTM_1s.py Backend Reference: Fernando, S., Sethu, V., Ambikairajah, E., Epps, J. (2017) Bidirectional Modelling for Short Duration Language Identification. Proc. Interspeech 2017, 2809-2813, DOI: 10.21437/Interspeech.2017-286. (https://www.isca-speech.org/archive/Interspeech_2017/abstracts/0286.html) | 906 |
RodMech/OSNet-IBN1-Lite | ['person re identification'] | ['Omni-Scale Feature Learning for Person Re-Identification'] | encoder.py OSNet.py utils.py main.py image_handler.py OsNetEncoder ndarray_to_tensor normalize resize Conv1x1 osnet_x0_5 osnet_x1_0 OSNet Conv3x3 Conv1x1Linear ConvLayer ChannelGate osnet_x0_25 LightConv3x3 osnet_ibn_x1_0 OSBlock osnet_x0_75 compress_bytes_image load_checkpoint compress_feature_vector load_pretrained_weights uncompress_string_image fromarray from_numpy transpose dtype unsqueeze_ clone float32 div_ as_tensor type load update items format print load_checkpoint warn OrderedDict load_state_dict startswith append state_dict pack list compress compress decompress reshape fromhex literal_eval | # OSNet-IBN (width x 1.0) Lite ## Contents This project is a reduced version of [OSNet](https://arxiv.org/pdf/1905.00953.pdf)(Omni-Scale Feature Learning for Person Re-Identification), a network designed by Kaiyang Zhou. It has been extracted from [Torxreid](https://github.com/KaiyangZhou/deep-person-reid), the main framework for training and testing reID CNNs. Many features from the original backend have been isolated. Torxvision dependency has been deleted to keep docker container as light as possible. The required methods have been extracted from the source code. The net is able to work both in GPU and CPU. ## Requirements The required packages have been version-pinned in the *requirements.txt*. `torch==1.2` `numpy==1.16.4` `Pillow` | 907 |
Rohit-ChoudharyGit/Train_Fashion_MNIST | ['stochastic optimization'] | ['Adam: A Method for Stochastic Optimization'] | fashion_mnist.py plot_image plot_value_array format xlabel grid imshow xticks argmax max yticks grid bar ylim set_color xticks argmax range yticks | # Train_Fashion_MNIST This is the way i started learning tensorflow, ENJOY ML! # Problem Statement : We have Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories.We need to train it on 60k images to predict the next 10k images result. Starting from the https://www.tensorflow.org/tutorials/ Don't just copy paste the code during learning . Instead read about every word that you encounter, that you don't know.This way we explore the field of ML. There are several blogs written so well to enlighten you on those topic. # For some of the questions that comes in our mind : 1. what is google colab? https://www.kdnuggets.com/2018/02/google-colab-free-gpu-tutorial-tensorflow-keras-pytorch.html | 908 |
RoichatulJannah21/neural-odes-segmentation | ['medical image segmentation', 'semantic segmentation'] | ['Neural Ordinary Differential Equations for Semantic Segmentation of Individual Colon Glands'] | model_utils.py metrics.py dataloader.py models.py train_utils.py augmentations.py inference_utils.py RandomRotationWithMask ElasticTransformations GLaSDataLoader postprocess split_objects remove_small_object inference_image hole_filling_per_object evaluate_image grow_to_fill_borders resize_image crop_result resize_to_size pad_image F1score ObjectDice ObjectHausdorff Hausdorff Dice ConvResUNet LevelBlock ConvResFunc ConvODEFunc Unet ConvODEUNet ConvBlock ODEBlock Conv2dTime Swish get_nonlinearity plot_losses evaluate_image resize_image crop_result array pad_image resize_to_size split_objects remove_small_object hole_filling_per_object grow_to_fill_borders label array eval resize pad max max copy regionprops range maximum_filter binary_fill_holes unique fromarray uint8 size astype resize array uint8 argmin astype delete where any mode unique zeros sum range Hausdorff len uint8 transpose fit astype delete where unique kneighbors max Inf len uint8 astype delete where logical_and flatten any mode unique zeros sum range len uint8 astype delete where shape any mode unique zeros sum range len logical_and show subplots arange plot suptitle cpu transpose imshow legend append numpy nfe len | # Neural Ordinary Differential Equations for Semantic Segmentation of Individual Colon Glands *Accepted to Medical Imaging meets NeurIPS workshop at NeurIPS 2019* Automated medical image segmentation plays a key role in quantitative research and diagnostics. Convolutional neural networks based on the U-Net architecture are the state-of-the-art. A key disadvantage is the hard-coding of the receptive field size, which requires architecture optimization for each segmentation task. Furthermore, increasing the receptive field results in an increasing number of weights. Recently, Neural Ordinary Differential Equations (NODE) have been proposed, a new type of continuous depth deep neural network. This framework allows for a dynamic receptive field at a constant memory cost and a smaller amount of parameters. | 909 |
RomeroBarata/skeleton_based_anomaly_detection | ['outlier detection', 'anomaly detection'] | ['Learning Regularity in Skeleton Trajectories for Anomaly Detection in Videos'] | tbad/gpu.py tbad/combined_model/data.py tbad/losses.py utils/metrics.py tbad/rnn_autoencoder/evaluate.py tbad/rnn_autoencoder/train.py train.py tbad/utils.py tbad/combined_model/evaluate.py tbad/combined_model/fusion.py tbad/rnn_autoencoder/data.py tbad/autoencoder/autoencoder.py evaluate.py tbad/combined_model/message_passing.py visualise.py tbad/eval.py tbad/visualisation.py tbad/autoencoder/data.py tbad/autoencoder/train.py tbad/train.py utils/score_scaling.py tbad/autoencoder/evaluate.py tbad/combined_model/train.py tbad/rnn_autoencoder/rnn.py tbad/data.py main create_arg_parser main create_arg_parser main create_arg_parser extract_width_height inverse_single_scale_trajectories normalise_bounding_box pull_global_features normalise_bounding_boxes discard_global_features change_coordinate_system from_global_to_image_all_cameras collect_test_trajectories extract_global_features_from_trajectory from_global_to_image write_trajectories normalise_trajectories scale_trajectories denormalise_trajectories remove_missing_skeletons discard_steps_from_padded_frames input2table scale_trajectories_three_stds denormalise_all_trajectories write_all_worst_mistakes collect_overlapping_trajectories from_image_to_bounding_box from_image_to_centre_bounding_box concatenate_features local_to_global_coordinates Trajectory normalise_joints detect_most_anomalous_or_most_normal_frames scale_trajectories_zero_one pad_trajectory remove_short_trajectories compute_bounding_boxes_from_image_features assemble_trajectories collect_trajectories denormalise_trajectory load_trajectories uniquify_reconstructions normalise_trajectories_video_resolution extract_center_of_mass extract_global_features uniquify_reconstruction collect_skeletons detect_anomalous_frames compute_bounding_boxes_from_global_features StdScaler write_all_reconstructed_trajectories from_image_to_global extract_centre_of_bounding_box generate_array_of_frames input_trajectories_missing_steps reverse_trajectories input_trajectory_missing_steps from_image_to_top_left_bounding_box is_short_trajectory _train_test_split_through_time compute_worst_mistakes load_anomaly_masks shuffle_data train_test_split_trajectories normalise_trajectory_video_resolution inverse_scale_trajectories train_test_split_through_time extract_input_dim combine_global_and_local_losses eval_complete_rnn_ae_models eval_ae_models eval_rnn_ae_models compute_all_cameras_performance_metrics configure_gpu_resources modified_mean_squared_error modified_mean_squared_error_2 balanced_mean_squared_error modified_mean_absolute_error modified_binary_crossentropy modified_balanced_mean_absolute_error balanced_mean_absolute_error modified_mean_squared_error_3 mean_absolute_error modified_binary_crossentropy_2 mean_squared_error binary_crossentropy train_ae train_complete_rnn_ae train_rnn_ae select_optimiser select_cell select_scaler_model resume_training_from_last_epoch select_loss set_up_logging compute_bounding_box _render_trajectories_skeletons compute_chest_centred_bounding_box draw_line render_video_diff_heatmaps draw_rect render_article_main_figure render_article_main_figure_2 render_trajectories_skeletons maybe_create_dir draw_poly compute_simple_bounding_box render_video_heatmaps_mpedrnn render_video_diff_heatmaps_hasan draw_skeleton insert_anomaly_mask_from_bounding_box _compute_ae_reconstruction_errors 
compute_ae_reconstruction_errors load_ae_pretrained_models Autoencoder load_pretrained_ae reconstruct_skeletons extract_global_features split_into_train_and_test scale_trajectories_three_stds quantile_transform_errors compute_ae_reconstruction_errors input_trajectories_missing_steps aggregate_autoencoder_data aggregate_autoencoder_evaluation_data scale_trajectories_robust assemble_ground_truth_and_reconstructions change_coordinate_system Trajectory scale_trajectories_zero_one load_anomaly_masks extract_size_features scale_trajectories remove_missing_skeletons load_trajectories eval_ae eval_aes train_ae restore_original_trajectory write_reconstructed_trajectories detect_most_anomalous_or_most_normal_frames compute_worst_mistakes inverse_scale write_predicted_masks normalise_errors_by_bounding_box_area restore_global_coordinate_system compute_num_frames_per_video extract_video_and_skeleton_ids clip_trajectories clip_trajectory generate_array_of_frames write_worst_mistakes eval_combined_model eval_combined_models load_pretrained_combined_model coordinate_change load_complete_rnn_ae_pretrained_models CombinedEncoderDecoder MessagePassingEncoderDecoder train_combined_model summarise_reconstruction_per_frame summarise_reconstruction_errors _aggregate_rnn_autoencoder_data summarise_reconstruction_errors_per_frame discard_information_from_padded_frames aggregate_rnn_autoencoder_data summarise_reconstruction compute_rnn_ae_reconstruction_errors retrieve_future_skeletons _aggregate_rnn_ae_evaluation_data remove_short_trajectories aggregate_rnn_ae_evaluation_data eval_rnn_aes eval_rnn_ae model_from_architecture_specification load_architecture_specification reconstruct_trajectories RNNEncoderDecoder load_pretrained_rnn_ae produce_data_from_model_type train_rnn_ae summarise_errors_per_frame summarise_reconstruction_errors compute_reconstruction_errors discard_errors_from_padded_frames frame_level_metrics ground_truth_and_reconstructions normalizing_3Dconv normalizing_lstm_autoencoder ScoreNormalization visualize add_argument_group add_argument add_parser ArgumentParser set_defaults add_subparsers create_arg_parser gpu_memory_fraction configure_gpu_resources func parse_args gpu_ids join listdir loadtxt shape range where reshape array print normalise_trajectories_video_resolution normalise_bounding_boxes compute_bounding_box reshape any range nanmax nanmin reshape where isnan shape nan reshape array array append range len concatenate pad_trajectory append range len append range len zeros seed permutation join items savetxt split makedirs load join listdir items collect_fn items items concatenate sort round len reshape astype mean shape int64 unique empty enumerate items keys uniquify_reconstruction keys reshape array keys join reshape hstack makedirs savetxt keys split vstack concatenate keys set keys hstack where nanmean isnan nan float32 astype apply_along_axis float32 astype apply_along_axis extract_centre_of_bounding_box hstack extract_width_height items reshape shape tile keys items keys items StdScaler where isnan vstack nan transform fit nanmin items data_min_ where isnan vstack nan tile transform MinMaxScaler fit scale_trajectories_three_stds scale_trajectories_zero_one shape reshape inverse_transform items reshape items inverse_transform int randint seed items update _train_test_split_through_time seed int permutation items items items hsplit astype int32 items apply_along_axis from_image_to_bounding_box from_image_to_global from_global_to_image shape reshape items shape reshape items items 
compute_bounding_box reshape any hsplit ravel enumerate compute_bounding_box reshape any hsplit ravel enumerate argsort array generate_array_of_frames append arange unique join sorted keys change_coordinate_system ground_truth_and_reconstructions save scale_trajectories remove_missing_skeletons sorted basename append pretrained_models load_trajectories reconstruct_skeletons extract_global_features concatenate set frame_level_anomaly_masks trajectories keys join compute_ae_reconstruction_errors savez print load_ae_pretrained_models load_anomaly_masks array inverse_single_scale_trajectories change_coordinate_system from_global_to_image_all_cameras ground_truth_and_reconstructions save load_pretrained_rnn_ae scale_trajectories discard_steps_from_padded_frames sorted basename add reconstruct_trajectories remove_short_trajectories append assemble_trajectories pretrained_models load_trajectories uniquify_reconstructions extract_global_features compute_reconstruction_errors concatenate set write_all_reconstructed_trajectories frame_level_anomaly_masks listdir keys trajectories join input_trajectories_missing_steps reverse_trajectories summarise_reconstruction_errors discard_errors_from_padded_frames savez print write_reconstructions load_anomaly_masks array split pull_global_features change_coordinate_system from_global_to_image_all_cameras ground_truth_and_reconstructions save scale_trajectories discard_steps_from_padded_frames sorted basename write_all_worst_mistakes concatenate_features local_to_global_coordinates remove_short_trajectories append assemble_trajectories pretrained_models load_trajectories uniquify_reconstructions extract_global_features overlapping_trajectories compute_reconstruction_errors reconstruct concatenate load_complete_rnn_ae_pretrained_models compute_bounding_boxes_from_global_features set write_all_reconstructed_trajectories frame_level_anomaly_masks trajectories keys join deepcopy reverse_trajectories summarise_reconstruction_errors discard_errors_from_padded_frames savez print compute_worst_mistakes write_reconstructions load_anomaly_masks inverse_scale_trajectories array write_bounding_boxes split reconstruction_errors_path ignore_training roc_auc_score len train_with_nonzero_scores_only append range concatenate files load ignore_scaler join print reshape average_precision_score select_scaler_model scaler_model ravel fit load global_model_losses anomaly_mask local_model_losses print files range len ConfigProto set_session Session sum square where sum array square where sqrt sum square where square where sqrt sum array sum log where cast not_equal ones_like not_equal where cast sum ones_like constant not_equal where cast sum ones_like not_equal where sqrt cast sum ones_like constant not_equal where sqrt cast sum cast not_equal ones_like not_equal where cast sum log normalisation_strategy coordinate_system batch_size change_coordinate_system hidden_dims scale_trajectories basename optimiser load_trajectories extract_global_features dump collect_skeletons set_up_logging shuffle Autoencoder global_model root_log_dir resume_training trajectories learning_rate print extract_input_dim train_test_split_trajectories resume_training_from_last_epoch train epochs array loss len normalisation_strategy coordinate_system batch_size cell_type change_coordinate_system pred_length reconstruct_reverse vstack hidden_dims scale_trajectories conditional_prediction input_length list basename input_gap remove_short_trajectories optimiser load_trajectories extract_global_features dump 
trajectories_path disable_reconstruction_branch set_up_logging shuffle global_model root_log_dir RNNEncoderDecoder zip conditional_reconstruction resume_training trajectories learning_rate input_trajectories_missing_steps print input_missing_steps extract_input_dim resume_training_from_last_epoch train_test_split_through_time train epochs array loss len batch_size cell_type change_coordinate_system pred_length reconstruct_reverse CombinedEncoderDecoder vstack scale_trajectories input_length list basename remove_short_trajectories global_hidden_dims optimiser load_trajectories extract_global_features dump set_up_logging shuffle extra_hidden_dims root_log_dir zip resume_training trajectories deepcopy learning_rate global_normalisation_strategy print local_hidden_dims extract_input_dim local_normalisation_strategy resume_training_from_last_epoch train_test_split_through_time train epochs array loss len join dirname strftime makedirs int load_weights reshape reshape where nanmean hsplit nan ones join sorted imwrite loadtxt reshape astype dict draw_skeleton int32 zip imread listdir range enumerate get join sorted imwrite loadtxt reshape astype dict draw_skeleton int32 zip imread listdir enumerate draw_line line circle int arange line append circle append pop draw_line draw_poly _render_trajectories_skeletons write_dir draw_trajectories_bounding_box person_id print draw_local_skeleton frames ground_truth_trajectories trajectories_colour maybe_create_dir draw_ground_truth_trajectories_skeleton ground_truth_trajectories_colour trajectories draw_trajectories_skeleton draw_ground_truth_trajectories_bounding_box imwrite where compute_simple_bounding_box full_like sorted draw_rect draw_skeleton imread get astype zip listdir enumerate int join loadtxt reshape isnan rectangle any int32 array circle len makedirs nan where join write_dir sorted ground_truth_frames imwrite cvtColor abs applyColorMap print maybe_create_dir zip imread listdir skip_first_n_frames generated_frames join write_dir uint8 imwrite print applyColorMap ground_truth_array astype float32 maybe_create_dir generated_array abs enumerate camera_id imwrite where logical_not frames sorted len maybe_create_dir ground_truth_trajectories nansum append draw_skeleton sum get write_dir video_id generated_trajectories astype nan zip full listdir keys enumerate join items print loadtxt reshape applyColorMap dict isnan int32 quantile MinMaxScaler array fit join load_pretrained_ae listdir load join load_architecture_specification Autoencoder load_weights listdir compile Trajectory use_global_features values use_size_features values values seed items permutation extend argsort append round enumerate len reshape scale_trajectories_robust shape where isnan nan transform RobustScaler fit append coordinates values items coordinates frames repeat append append enumerate sorted zeros_like len astype maximum extend int32 unique append keys split reshape items input_missing_steps values extract_global_features basename compute_ae_reconstruction_errors aggregate_autoencoder_evaluation_data assemble_ground_truth_and_reconstructions print change_coordinate_system predict load_pretrained_ae load_anomaly_masks frame_level_anomaly_masks pretrained_model scale_trajectories trajectories array remove_missing_skeletons load_trajectories join sorted quantile_transform_errors all_trajectories namedtuple print eval_ae all_frame_level_anomaly_masks video_resolution append EvalAeArgs pretrained_models listdir keys output_activation split_into_train_and_test join 
aggregate_autoencoder_data shape reshape inverse_transform shape reshape shape reshape tile join reshape hstack savetxt unique extract_video_and_skeleton_ids makedirs append split items len split join unique extract_video_and_skeleton_ids save zeros insert_anomaly_mask_from_bounding_box enumerate makedirs join items clip_trajectory reshape ones_like hsplit enumerate float_power reshape apply_along_axis where shape write_anomaly_masks change_coordinate_system write_reconstructed_trajectories compute_rnn_ae_reconstruction_errors write_mistakes write_predicted_masks reconstruct_reverse compute_num_frames_per_video scale_trajectories assemble_ground_truth_and_reconstructions basename apply_along_axis detect_most_anomalous_or_most_normal_frames inverse_scale restore_global_coordinate_system remove_short_trajectories load_trajectories predict extract_global_features write_worst_mistakes overlapping_trajectories load_pretrained_combined_model discard_information_from_padded_frames restore_original_trajectory multiple_outputs retrieve_future_skeletons frame_level_anomaly_masks trajectories aggregate_rnn_ae_evaluation_data deepcopy summarise_reconstruction_errors print summarise_reconstruction compute_worst_mistakes write_reconstructions write_predictions_bounding_boxes load_anomaly_masks pretrained_model write_predictions array loss write_bounding_boxes write_anomaly_masks write_mistakes all_frame_level_anomaly_masks video_resolution roc_auc_score sorted quantile_transform_errors EvalCombinedModelArgs append pretrained_models overlapping_trajectories eval_combined_model listdir keys join all_trajectories namedtuple print average_precision_score write_reconstructions write_predictions_bounding_boxes write_predictions write_bounding_boxes join listdir load_pretrained_combined_model load join MessagePassingEncoderDecoder load_architecture_specification load_weights CombinedEncoderDecoder listdir compile output_activation message_passing batch_size cell_type change_coordinate_system pred_length reconstruct_reverse CombinedEncoderDecoder scale_trajectories input_length basename l1_reg rec_length remove_short_trajectories global_hidden_dims optimiser load_trajectories extract_global_features split_into_train_and_test dump set_up_logging shuffle out_normalisation_strategy multiple_outputs extra_hidden_dims root_log_dir resume_training trajectories deepcopy MessagePassingEncoderDecoder learning_rate input_trajectories_missing_steps join global_normalisation_strategy l2_reg multiple_outputs_before_concatenation print reconstruct_original_data input_missing_steps aggregate_autoencoder_data aggregate_rnn_autoencoder_data local_hidden_dims local_normalisation_strategy resume_training_from_last_epoch train epochs array loss len items coordinates _aggregate_rnn_autoencoder_data vstack append values append stack range len append _aggregate_rnn_ae_evaluation_data values coordinates trajectory_id frames shape append full range len loss_fn shape reshape concatenate unique append summarise_reconstruction_errors_per_frame len mean shape unique empty enumerate summarise_reconstruction_per_frame concatenate reshape shape vstack unique append len mean empty unique enumerate append zeros vstack concatenate append vstack change_coordinate_system compute_rnn_ae_reconstruction_errors reconstruct_reverse load_pretrained_rnn_ae scale_trajectories assemble_ground_truth_and_reconstructions basename remove_short_trajectories predict load_trajectories extract_global_features overlapping_trajectories concatenate 
discard_information_from_padded_frames retrieve_future_skeletons frame_level_anomaly_masks trajectories aggregate_rnn_ae_evaluation_data deepcopy summarise_reconstruction_errors print load_anomaly_masks pretrained_model array loss join overlapping_trajectories sorted EvalRNNAeArgs all_trajectories namedtuple quantile_transform_errors print eval_rnn_ae average_precision_score all_frame_level_anomaly_masks video_resolution append pretrained_models listdir keys roc_auc_score load join model_from_architecture_specification load_architecture_specification load_weights listdir compile output_activation use_first_step_as_reference l1_reg extract_delta model_type rec_length join l2_reg produce_data_from_model_type extract_global_features split_into_train_and_test deepcopy input_trajectories_missing_steps aggregate_autoencoder_data print aggregate_rnn_autoencoder_data concatenate change_coordinate_system shuffle scale_trajectories sorted zeros_like astype maximum int64 append keys split sorted zeros_like len astype maximum extend int64 append keys split astype empty_like int64 summary_fn unique enumerate input2table loss_fn summarise_errors_per_frame load str visualize score reshape name len get_fit_params_string files ravel range fit show subplots set_title scatter set_ylabel hist legend | This is the code for the CVPR'19 paper ["Learning Regularity in Skeleton Trajectories for Anomaly Detection in Videos".](https://openaccess.thecvf.com/content_CVPR_2019/html/Morais_Learning_Regularity_in_Skeleton_Trajectories_for_Anomaly_Detection_in_Videos_CVPR_2019_paper.html) # Environment Setup First please create an appropriate environment using conda: > conda env create -f environment.yml > conda activate tbad # Download Data Due to space constraints in the Github repository, please download the data from the following link and place the `data` folder in this directory. Link: [trajectories](https://bit.ly/2TWCxFY) # Test Pre-Trained Models To evaluate pre-trained models run the evaluate.py script. | 910 |
RoninTheKid/yessir | ['face swapping'] | ['DeepFaceLab: Integrated, flexible and extensible face-swapping framework'] | models/Model_XSeg/Model.py mainscripts/VideoEd.py XSegEditor/QImageDB.py merger/MergerScreen/MergerScreen.py mainscripts/dev_misc.py merger/InteractiveMergerSubprocessor.py samplelib/SampleGeneratorImageTemporal.py core/leras/models/__init__.py samplelib/Sample.py core/joblib/__init__.py core/imagelib/estimate_sharpness.py core/joblib/MPFunc.py core/leras/layers/DenseNorm.py core/leras/nn.py samplelib/SampleProcessor.py core/leras/archis/__init__.py core/leras/layers/Conv2D.py core/stdex.py core/imagelib/text.py core/leras/models/PatchDiscriminator.py core/leras/__init__.py models/Model_XSeg/__init__.py core/leras/layers/DepthwiseConv2D.py core/imagelib/equalize_and_stack_square.py samplelib/__init__.py core/imagelib/sd/draw.py core/randomex.py core/joblib/MPClassFuncOnDemand.py core/leras/optimizers/__init__.py core/leras/initializers/__init__.py mainscripts/Extractor.py mainscripts/Sorter.py core/mathlib/__init__.py core/leras/optimizers/OptimizerBase.py core/imagelib/draw.py samplelib/SampleGeneratorBase.py core/leras/layers/BatchNorm2D.py core/leras/layers/ScaleAdd.py XSegEditor/QCursorDB.py models/__init__.py core/joblib/SubprocessGenerator.py DFLIMG/DFLIMG.py core/leras/layers/__init__.py models/ModelBase.py facelib/FaceEnhancer.py core/joblib/ThisThreadGenerator.py XSegEditor/QIconDB.py core/imagelib/blursharpen.py merger/MergeMasked.py core/cv2ex.py models/Model_Quick96/__init__.py main.py core/leras/archis/ArchiBase.py localization/localization.py facelib/LandmarksProcessor.py core/leras/layers/Dense.py models/Model_SAEHD/__init__.py mainscripts/Merger.py mainscripts/XSegUtil.py core/imagelib/filters.py XSegEditor/QStringDB.py core/leras/initializers/CA.py core/imagelib/__init__.py core/leras/models/ModelBase.py samplelib/SampleGeneratorFaceTemporal.py core/leras/layers/InstanceNorm2D.py facelib/S3FDExtractor.py samplelib/SampleLoader.py samplelib/SampleGeneratorFaceXSeg.py core/qtex/qtex.py DFLIMG/__init__.py core/leras/optimizers/RMSprop.py core/structex.py core/leras/layers/BlurPool.py facelib/__init__.py core/leras/archis/DeepFakeArchi.py core/imagelib/SegIEPolys.py mainscripts/FacesetEnhancer.py merger/FrameInfo.py models/Model_SAEHD/Model.py core/mplib/__init__.py samplelib/SampleGeneratorFacePerson.py merger/MergerScreen/__init__.py core/imagelib/common.py core/imagelib/sd/__init__.py core/leras/layers/AdaIN.py core/leras/layers/LayerBase.py facelib/FANExtractor.py core/leras/models/XSeg.py samplelib/SampleGeneratorFace.py merger/MergeAvatar.py core/pathex.py core/leras/layers/Conv2DTranspose.py core/qtex/QXMainWindow.py merger/__init__.py core/imagelib/morph.py core/joblib/SubprocessorBase.py mainscripts/Trainer.py samplelib/PackedFaceset.py core/mathlib/umeyama.py mainscripts/Util.py core/interact/__init__.py core/leras/layers/FRNorm2D.py core/leras/layers/Saveable.py core/leras/layers/TLU.py facelib/XSegNet.py core/mplib/MPSharedList.py core/leras/models/CodeDiscriminator.py facelib/FaceType.py XSegEditor/XSegEditor.py merger/MergerConfig.py core/leras/ops/__init__.py core/interact/interact.py core/leras/device.py core/imagelib/sd/calc.py core/imagelib/reduce_colors.py samplelib/SampleGeneratorImage.py localization/__init__.py DFLIMG/DFLJPG.py core/qtex/__init__.py core/qtex/QSubprocessor.py core/qtex/QXIconButton.py samplelib/SampleGeneratorFaceCelebAMaskHQ.py core/imagelib/warp.py core/osex.py core/imagelib/color_transfer.py 
models/Model_Quick96/Model.py process_dev_test process_merge process_videoed_cut_video process_train process_faceset_enhancer process_xsegeditor process_xsegapply process_xsegremove process_xsegremovelabels process_videoed_video_from_sequence process_xsegfetch process_util process_extract fixPathAction process_videoed_extract_video process_sort bad_args process_videoed_denoise_image_sequence cv2_imwrite cv2_resize cv2_imread set_process_dpi_aware get_screen_size set_process_lowest_prio get_image_paths move_all_files write_bytes_safe get_first_file_by_stem get_image_unique_filestem_paths get_all_dir_names get_file_paths delete_all_files scantree get_paths get_all_dir_names_startswith random_normal suppress_stdout_stderr struct_unpack LinearMotionBlur blursharpen _scale_array color_transfer color_transfer_idt color_transfer_mkl reinhard_color_transfer lab_image_stats linear_color_transfer channel_hist_match color_transfer_mix color_transfer_sot color_hist_match overlay_alpha_image cut_odd_image normalize_channels draw_polygon draw_rect equalize_and_stack_square compute _calculate_sharpness_metric marziliano_method get_block_contrast _simple_thinning estimate_sharpness is_edge_block sobel apply_random_motion_blur apply_random_rgb_levels apply_random_hsv_shift apply_random_bilinear_resize apply_random_gaussian_blur morphTriangle morph_by_points applyAffineTransform reduce_colors SegIEPolys SegIEPolyType SegIEPoly get_text_image draw_text_lines draw_text _get_pil_font get_draw_text_lines warp_by_params gen_warp_params dist_to_edges random_circle_faded circle_faded InteractBase InteractColab InteractDesktop MPClassFuncOnDemand MPFunc SubprocessGenerator Subprocessor ThisThreadGenerator Devices Device nn ArchiBase DeepFakeArchi CAInitializerSubprocessor initializers AdaIN BatchNorm2D BlurPool Conv2D Conv2DTranspose Dense DenseNorm DepthwiseConv2D FRNorm2D InstanceNorm2D LayerBase Saveable ScaleAdd TLU CodeDiscriminator ModelBase UNetPatchDiscriminator PatchDiscriminator XSeg dssim concat average_gv_list resize2d_bilinear flatten rgb_to_lab resize2d_nearest space_to_depth tf_gradients random_binomial style_loss gelu init_weights tf_get_value upsample2d reshape_4D batch_set_value max_pool average_tensor_list gaussian_blur depth_to_space OptimizerBase RMSprop umeyama get_power_of_two rotationMatrixToEulerAngles polygon_area ArrayFillerSubprocessor MPSharedList IndexHost Index2DHost ListHost DictHostCli DictHost QSubprocessor QDarkPalette QActionEx QSize_to_np QImage_from_np QImage_to_np QPixmap_from_np QPoint_to_np QPoint_from_np QXIconButton QXMainWindow DFLIMG DFLJPG FaceEnhancer FaceType FANExtractor blur_image_hull_mask mirror_landmarks get_face_struct_mask estimate_pitch_yaw_roll convert_98_to_68 expand_eyebrows get_rect_from_landmarks get_transform_mat draw_rect_landmarks get_cmask transform_points estimate_averaged_yaw calc_face_pitch alpha_to_color get_image_eye_mask draw_landmarks get_image_hull_mask S3FDExtractor XSegNet dev_test_68 dev_test1 dev_resave_pngs extract_vggface2_dataset extract_umd_csv dev_segmented_trash process_folder FacesetEnhancerSubprocessor extract_video video_from_sequence denoise_image_sequence cut_video remove_xseg remove_xseg_labels apply_xseg fetch_xseg FrameInfo InteractiveMergerSubprocessor MergeFaceAvatar process_frame_info MergeMasked MergeMaskedFace MergerConfigMasked MergerConfigFaceAvatar MergerConfig ScreenManager ScreenAssets Screen ModelBase PreviewHistoryWriter import_model QModel SAEHDModel XSegModel PackedFaceset Sample SampleType SampleGeneratorBase 
SampleGeneratorFace SampleGeneratorFaceCelebAMaskHQ MaskType SampleGeneratorFacePerson Index2DHost SampleGeneratorFaceTemporal SampleGeneratorFaceXSeg SegmentedSampleFilterSubprocessor SampleGeneratorImage SampleGeneratorImageTemporal FaceSamplesLoaderSubprocessor SampleLoader SampleProcessor QCursorDB QIconDB QImageDB QStringDB ImagePreviewSequenceBar QUIConfig QCanvasOperator LoaderQSubprocessor CanvasConfig OpMode QCanvas DragType ViewLock ColorScheme QCanvasControlsLeftBar start QCanvasControlsRightBar MainWindow PTEditMode main set_process_lowest_prio main set_process_lowest_prio unpack_faceset pack save_faceset_metadata log_info restore_faceset_metadata_folder pack_faceset save_faceset_metadata_folder restore_faceset_metadata Path input_dir unpack recover_original_aligned_filename set_process_lowest_prio add_landmarks_debug_images main set_process_lowest_prio main set_process_lowest_prio output_ext fps extract_video output_dir input_file set_process_lowest_prio audio_track_id from_time bitrate to_time cut_video input_file set_process_lowest_prio factor denoise_image_sequence set_process_lowest_prio input_dir video_from_sequence set_process_lowest_prio Path set_process_lowest_prio input_dir process_folder dev_test set_process_lowest_prio input_dir start Path set_process_lowest_prio input_dir model_dir apply_xseg Path input_dir set_process_lowest_prio Path remove_xseg set_process_lowest_prio input_dir remove_xseg_labels Path set_process_lowest_prio input_dir Path fetch_xseg set_process_lowest_prio input_dir print_help exit loader_func asarray bytearray imencode suffix shape normalize_channels resize nice SetPriorityClass HANDLE GetCurrentProcess SetProcessDPIAware user32 write_bytes parent name unlink rename exists is_dir scandir str list scandir any Path scantree exists append remove get_image_paths name stem set add verbose_print_func Path exists Path exists Path exists str list lower scandir Path startswith append exists str sorted list path lower scandir Path exists name Path rename get_file_paths unlink Path get_file_paths normal empty prod range calcsize warpAffine ones getRotationMatrix2D zeros sum medianBlur addWeighted ones zeros GaussianBlur max dtype reshape astype copy argsort shape bilateralFilter fill empty range eps T clip reshape eig dot shape sqrt cov mean diag T reshape min astype float32 empty_like solve dot shape histogram interp max range _scale_array uint8 astype float32 merge lab_image_stats COLOR_LAB2BGR cvtColor split T reshape transpose mean dot eigh eye cholesky split min max float64 astype shape unique interp ravel dtype astype shape channel_hist_match range uint8 astype float32 COLOR_BGR2LAB color_transfer_sot COLOR_LAB2BGR cvtColor uint8 color_transfer_idt color_transfer_mkl astype float32 reinhard_color_transfer linear_color_transfer color_transfer_sot clip shape repeat len shape shape range tuple line range len draw_polygon concatenate shape resize expand_dims max enumerate T convolve square mean sqrt array shape zeros float64 marziliano_method astype canny sobel gradient atan2 shape any zeros round range int exp slice get_block_contrast shape flipud round zeros is_edge_block rot90 range cvtColor COLOR_BGR2GRAY rand random clip array COLOR_HSV2BGR random merge COLOR_BGR2HSV randint clip cvtColor split LinearMotionBlur randint random randint GaussianBlur random int rand random shape resize float32 getAffineTransform float32 fillConvexPoly shape boundingRect int32 applyAffineTransform zeros expand_dims array shape morphTriangle zeros simplices fromarray 
uint8 convert astype COLOR_RGB2BGR array cvtColor truetype asarray Draw get_default_ttf_font_name concatenate text new _get_pil_font shape clip draw_text range len draw_text_lines zeros shape T random astype copy float32 getRotationMatrix2D dict uniform linspace random_normal warpAffine remap resize norm clip einsum concatenate norm reshape empty abs clip max random randint initializer inputs append batch_set_value run gradients expand_dims __enter__ __exit__ enumerate reduce_mean __enter__ __exit__ concat pow tanh sqrt pi as_list reshape tile transpose value resize transpose value resize transpose reshape transpose randint float32 pad make_kernel tile depthwise_conv2d gaussian_blur dtype constant arange reshape float32 square reduce_mean reducer cast softmax tile max as_list reshape transpose as_list reshape transpose constant reshape multiply matmul cast svd T ones matrix_rank mean dot eye sum diag sqrt atan2 shape Format_Grayscale8 Format_BGR888 Format_ARGB32 height reshape convertToFormat width constBits setsize range squeeze invertAffineTransform shape transform expand_dims get norm getAffineTransform polygon_area astype float32 transform_points sqrt estimate_averaged_yaw array transform_points FULL_NO_ALIGN get_transform_mat float32 array copy concatenate expand_eyebrows fillConvexPoly convexHull zeros int getStructuringElement astype fillConvexPoly MORPH_ELLIPSE convexHull dilate zeros GaussianBlur shape zeros concatenate process copy blend alpha_to_color zeros get_image_hull_mask gdf max clip int blur getStructuringElement min erode argwhere MORPH_ELLIPSE expand_dims copy draw_landmarks zeros expand_eyebrows concatenate polylines tuple shape get_image_hull_mask array circle get_transform_mat draw_rect transform_points draw_polygon draw_landmarks array array rotationMatrixToEulerAngles concatenate astype float32 pi solvePnP zeros array clip get pop get_image_paths parent log_info name stem progress_bar_generator get_all_dir_names Path mkdir run fromString split cv2_imread Path normalize_channels exists input_bool str log_info name stem append get_image_paths get_rect_from_landmarks unlink mkdir parent cv2_imwrite progress_bar_generator read_text split get str get_image_paths parent log_info name len unlink Path mkdir split log_err run range exists fromString input_bool get_image_paths progress_bar_generator get_all_dir_names Path x get_image_paths cv2_imwrite progress_bar_generator cv2_imread Path get_image_paths parent name stem rename Path mkdir append input_bool join get_image_paths log_info parent name copy unlink rmtree mkdir run update str get_image_paths parent input_str stem output get_first_file_by_stem unlink input_int mkdir Path log_err input run str suffix parent input_str stem overwrite_output input_int log_err Path input max run update str suffix parent progress_bar_generator output input_int rename log_err Path run clip enumerate suffix input_str wait input_int Path max input_bool str stem input update run_async get_image_paths close mkdir parent overwrite_output get_first_file_by_stem log_err probe load extract initialize get_image_paths log_info set_xseg_mask progress_bar_generator astype float32 get_resolution ask_choose_device shape XSegNet resize save load str get_image_paths log_info parent name has_polys progress_bar_generator copy get_seg_ie_polys mkdir load get_image_paths log_info set_xseg_mask input_str progress_bar_generator has_xseg_mask save load get_image_paths log_info input_str has_seg_ie_polys progress_bar_generator save set_seg_ie_polys warpAffine 
get_transform_mat astype float32 cv2_imread normalize_channels filename clip sharpen_func sharpen_mode concatenate predictor_func add_source_image process_frame_info temporal_face_count append range sharpen_amount predictor_func color_transfer_mkl motion_power bicubic_degrade_power motion_blur_power linear_color_transfer color_transfer_mix boundingRect resize reduce_colors max clip face_enhancer_func hist_match_threshold medianBlur super_resolution_power WARP_INVERSE_MAP ones LinearMotionBlur shape pad blur_mask_modifier image_denoise_power masked_hist_match blursharpen range color_hist_match BORDER_TRANSPARENT warpAffine sharpen_mode xseg_256_extract_func seamlessClone color_transfer_idt astype copy reinhard_color_transfer empty_like motion_deg INTER_CUBIC MORPH_ELLIPSE color_transfer_sot dilate GaussianBlur get_image_hull_mask NORMAL_CLONE uint8 int erode_mask_modifier getStructuringElement get_transform_mat float32 erode argwhere blursharpen_amount color_degrade_power landmarks_list concatenate astype float32 cv2_imread shape normalize_channels MergeMaskedFace filepath clip enumerate str parent cv2_imread locals __import__ globals dict setApplicationName setPalette QDarkPalette Path show str initialize log_info setWindowIcon addApplicationFont AA_EnableHighDpiScaling setStyle setFont gettempdir setAttribute QApplication path_contains app_icon MainWindow exec_ parent QFont raise_ AA_UseHighDpiPixmaps | <table align="center" border="0"> <tr><td colspan=2 align="center"> # DeepFaceLab <a href="https://arxiv.org/abs/2005.05535"> <img src="https://static.arxiv.org/static/browse/0.3.0/images/icons/favicon.ico" width=14></img> https://arxiv.org/abs/2005.05535</a> ### the leading software for creating deepfakes <img src="doc/DFL_welcome.png" align="center"> </td></tr> <tr><td colspan=2 align="center"> | 911 |
Rose-STL-Lab/HDR-IL | ['imitation learning'] | ['Deep Imitation Learning for Bimanual Robotic Manipulation'] | Table_Lift_HDR-IL/utils.py Table_Lift_HDR-IL/TrainDynamicModels.py Table_Lift_HDR-IL/GenerateProjections.py Simulation Code/primitives.py Simulation Code/peg_in_hole_visualization.py Simulation Code/table_lift_and_place_visualization.py Simulation Code/peg_in_hole.py Simulation Code/__init__.py Table_Lift_HDR-IL/TrainPlanningModel.py Table_Lift_HDR-IL/Models/HDRIL_Models.py Simulation Code/lift_and_place_table.py getJointStates place getRevoluteJoints retract front_grasp extend getGripperLocations lift getLinkNames move_sideways getJointStates getRevoluteJoints getJointNames main visualize getJointStates getRevoluteJoints retract front_grasp connect front_grasp2 lift lower getLinkNames getJointNames move_sideways get_primitive_data getJointStates getPredictedGripperPositions getRevoluteJoints main visualize diff_dtw_loss _traceback analyzeErrors DTW_Loss label_primitive BaxterDataset dtw smooth_min _Loss outputHeaders selectColumnLabels plotresults GATLayer PlanningGAT PlanningVAE Decoder MultiHeadGATLayer VAE GAT PlanningDecoder GAT2 append getNumJoints range append getRevoluteJoints getJointStates getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation append range getJointStates getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation append range list getJointStates getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation append range list getJointStates getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation append range list getJointStates getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation append range list getJointStates getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation append range getNumJoints decode range uniform print getNumJoints range getQuaternionFromEuler getRevoluteJoints setGravity GUI connect shape loadURDF range disconnect setJointMotorControlArray multiplyTransforms stepSimulation setTimeStep invertTransform removeBody print calculateInverseKinematics resetSimulation resetDebugVisualizerCamera print shape read_csv range visualize append get_primitive_data getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation get_primitive_data range list getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation get_primitive_data range get_primitive_data get_primitive_data get_primitive_data list getJointStates getRevoluteJoints calculateInverseKinematics setJointMotorControlArray stepSimulation get_primitive_data range getQuaternionFromEuler getJointStates getRevoluteJoints DataFrame GUI setGravity calculateInverseKinematics connect setJointMotorControlArray range resetSimulation to_csv stepSimulation loadURDF resetDebugVisualizerCamera append setTimeStep read_csv list show tolist axvline ylabel add title savefig legend gca plot set_xticklabels subtract tight_layout sqrt xlabel set_xticks figure fill_between array len subplots arange rand to_numpy show set_title selectRows shape savefig legend sum range plot copy mean sqrt print zeros fill_between std read_csv insert shape argmin array max min dist _traceback copy zeros full range mean exp log max min dist clone _traceback smooth_min zeros full range cat | # Deep Imitation Learning for Bimanual Robotic Manipulation This repository is for the NeurIPS 2020 paper "Deep Imitation 
Learning for Bimanual Robotic Manipulation" (https://papers.nips.cc/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-Paper.pdf). We present a deep imitation learning framework for robotic bimanual manipulation in a continuous state-action space. Our model learns to imitate table lifting movements in simulation and generalize the learned skills to tables at different locations. The repository contains the files we used to generate our training data using and the models we used to obtain our experiment results. We used Pybullet to generate our simulations and Pytorch to build our deep learning models. More details are found below. # Demonstrations We created the following tasks in PyBullet to generate training data. The GIFs below are one instance of our demonstration for each of the two tasks. A video file showing our model predictions can be found in "Simulations.mov" ### Table Lifting Task Demonstration  ### Peg-In-Hole Task Demonstration  # Generating Data | 912 |
RotemMayo/NAF | ['density estimation', 'speech synthesis'] | ['Neural Autoregressive Flows'] | external_maf/datasets/__init__.py combine_bg_sig_dataset.py external_maf/__init__.py maf_experiments.py external_maf/bsds300.py external_maf/hepmass.py vae_experiments.py auto_encoder_losses.py external_maf/power.py external_maf/util.py external_maf/lhc_binned.py auto_encoder_dataloader.py utils.py auto_encoder.py ops.py external_maf/mnist.py external_maf/gas.py beta1_vae_model.py test_model.py loss_filter.py steps_plot.py external_maf/lhc.py external_maf/cifar10.py download_datasets.py auto_encoder_model.py sf_sinewave.py external_maf/miniboone.py load parameter_search trim_outliers test plot_histogram run_net plot_losses save main train DataLoaderGenerator Losses BasicAutoEncoder load train_and_save_model VAE main loss main load_omniglot_image load_mnist_image load_cifar10_image load_bmnist_image InputOnly load_maf_data DatasetWrapper step sigmoids get_dataloader save_plot print_to_file load_model trim_outliers save_densities test_model all_plots create_pdf plot_tsne normalize_data binned_main main get_scores print_cuts plot_histograms preprocess_dataframe plot_mj get_n_signal_dataset plot_params NumpyDataset get_mjj plot_cuts svd_whiten BSDS300 CIFAR10 GAS get_correlation_numbers load_data load_data_and_clean load_data_and_clean_and_split load_data_no_discrete_normalised_as_array HEPMASS load_data_no_discrete load_data_no_discrete_normalised load_data load_data_normalised load_data LHC LHC_BINNED load_data_normalised load_data load_data_normalised load_data MINIBOONE MNIST POWER load_data_split_with_noise load_data load_data_normalised discrete_sample load logit ess_importance probs2contours ess_mcmc calc_whitening_transform copy_model_parms one_hot_encode isdistribution plot_pdf_marginals disp_imdata isposint plot_hist_marginals logistic save whiten make_folder sum format collect criterion backward print float zero_grad range tqdm reset save append step net load format dropout print encoder_layer_sizes decoder_layer_sizes BasicAutoEncoder AdamW parameters load_state_dict load format collect print plot_histogram eval save array exists yscale format plot print close savefig figure int format trim_outliers xlabel print close ylabel title hist savefig figure legend load format BasicAutoEncoder AdamW DataLoaderGenerator test parameters plot_losses train exists int deepcopy format join print random system run_net round parameter_search MSELoss sum exp train_it NumpyDataset Adam VAE parameters DataLoader save Adam VAE int format train_and_save_model get_n_signal_dataset read_csv drop load signal_percent load_model ones nan_to_num ROOT normalize_data save append get_scores zeros reshape format astype load astype OneHotEncoder open arange reshape_data astype delete choice loadmat update load read without_keys model loads isfile parse_args concatenate mean shape randint std Variable DataLoader numpy array x savefig print output add_page image FPDF T save_plot concatenate tsne close to_csv read_csv add_legend DataFrame fit_transform exists save_plot format trim_outliers xlabel hstack close ylabel tqdm title hist figure legend range save_plot format xlabel min hexbin ylabel plot_tsne close tqdm create_pdf title mkdir figure legend abs range plot_histograms int str print_to_file print sum load str format signal_percent load_model print_to_file print ones all_plots nan_to_num ROOT normalize_data save append get_scores print_cuts zeros astype float32 get_dataloader format exists print Variable reshape 
concat to_csv tqdm zip append DataFrame read_csv test_model tqdm mkdir range len format save_densities hist2d figure colorbar plot_params concat min max seed full choice dot svd print FastICA components_ mean transform to_numpy svd_whiten std fit format plot print logical_and tqdm figure append sum max hlines read_pickle drop corr sum get_correlation_numbers mean any load_data std drop int as_matrix read_csv load_data drop mean std load_data_no_discrete int T Counter load_data_no_discrete_normalised append load int concatenate shuffle nan_to_num shape randint mean vstack load_data std seed int RandomState rand hstack shuffle delete load_data zeros load_data_split_with_noise subplots ndarray isinstance mpl_connect flatten set_visible plot_page prod rand sum zeros_like mean shape xrange sum ones_like asarray cumsum reshape flatten shape argsort show list asarray subplots probs2contours plot concatenate vlines reshape set_xlim ndim shape eval linspace xrange meshgrid contour set_ylim show int asarray subplots vlines plot set_xlim sqrt hist xrange set_ylim dump close open close open T eig mean sqrt dot dot copy parms get_value set_value izip zeros makedirs | # NAF ML4Jets HUJI
This is not the original NAF Repo! All credit for the original architecture goes to https://github.com/CW-Huang.
## Original docs
Experiments for the Neural Autoregressive Flows paper: https://arxiv.org/abs/1804.00779
This repo depends on another library for PyTorch modules: https://github.com/CW-Huang/torchkit
| 913 |
Rowing0914/Atari-Grand-Challenge-Processor | ['imitation learning'] | ['The Atari Grand Challenge Dataset'] | pre_processing.py Atari_Imitation/Atari_Traj_Util.py Atari_Imitation/Atari_Imitation.py tarin main build_cnn_model test preprocess load_traj_prepro Sequential add Dense Conv2D Flatten load str zip print compile TensorBoard to_categorical dict mkdir save unique max load_traj_prepro fit argmax predict_on_batch render reset preprocess append step array range clear_session make str load_model test Monitor save tarin build_cnn_model n sorted concatenate print tolist close astype open append listdir array read_csv split _preprocess tolist astype empty enumerate | ## Dataset - Images (`screens/env_name/episode_num/filename.npy`): 84x84 in Gray-scale - Actions (`trajectories/env_name/summary.txt`): (num_of_episode, frame, reward, score, terminal, action) ### Directory Structure - Total frame: 9,679,856 - Whole data size: 97.7 GB - Directory Architecture: ```shell /atari_v2_release/ ├── screens # frames of game scenes | 914 |
RowitZou/RankAE | ['denoising'] | ['Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders'] | src/models/predictor.py src/prepro/utils.py src/train.py src/models/rankae_trainer.py src/prepro/data_builder.py src/models/rankae.py src/models/data_loader.py src/distributed.py src/others/logging.py src/models/encoder.py src/models/generator.py src/preprocess.py src/models/reporter.py src/models/decoder.py src/models/neural.py src/others/tokenization.py src/translate/penalties.py src/models/adam.py src/others/utils.py src/models/optimizers.py src/translate/beam.py src/models/loss.py src/train_abstractive.py all_gather_list all_reduce_and_rescale_tensors is_master multi_init str2bool str2bool validate test_text train_multi validate_ test ErrorHandler baseline train_single train str2bool run Adam batch_size_fn Batch Dataloader load_dataset DataIterator TransformerDecoderState TransformerDecoder TransformerDecoderLayer Classifier RelativePositionalEncoding DistancePositionalEncoding Bert PositionalEncoding TransformerEncoder TransformerEncoderLayer CopyGenerator Generator collapse_copy_scores shards CopyGeneratorLoss LossComputeBase filter_shard_state CopyGeneratorLossCompute LabelSmoothingLoss LogisticLossCompute NMTLossCompute abs_loss GlobalAttention PositionwiseFeedForward gumbel_soft2hard sequence_mask gelu aeq DecoderState MultiHeadedAttention gumbel_softmax use_gpu Optimizer build_optim build_optim_other build_optim_bert Translator build_predictor Translation RankAE _tally_parameters build_trainer Trainer ReportMgr ReportMgrBase Statistics build_report_manager init_logger BasicTokenizer WordpieceTokenizer load_vocab whitespace_tokenize _is_whitespace _is_control BertTokenizer _is_punctuation tile rouge_results_to_str test_length test_bleu prepro BertData _prepro _get_ngrams _get_word_ngrams Beam GNMTGlobalScorer PenaltyBuilder print get_rank init_process_group all_reduce_buffer element_size numel all_reduce zero_ div_ append list bytes tolist dumps get_world_size _out_buffers ByteTensor loads all_gather item append _in_buffer cuda range len Process pid join add_child get_context SimpleQueue ErrorHandler start init_logger info world_size range append print gpu_ranks multi_init world_size train_single setattr test_length str info close Dataloader test_batch_ex_size Rouge test_bleu load_dataset get_scores rouge_results_to_str FilesRouge test_batch_size flush open join sorted int info getmtime glob sort validate_ test sleep model_path test_all append enumerate load from_pretrained vocab AbsSummarizer validate print build_predictor Dataloader test_from eval bert_dir load_dataset test_batch_ex_size info vars setattr keys test_batch_size load from_pretrained vocab AbsSummarizer validate print build_predictor Dataloader test_from eval bert_dir load_dataset test_batch_ex_size info vars setattr keys test_batch_size load from_pretrained vocab AbsSummarizer print build_predictor Dataloader translate test_from eval bert_dir load_dataset test_batch_ex_size info vars setattr keys test_batch_size train_single train_multi from_pretrained sep_optim build_trainer log_file train_steps seed str set_device build_optim_other init_logger vocab train_from train_from_ignore_optim manual_seed info vars setattr keys load AbsSummarizer build_optim_bert bert_dir train glob data_path sorted shuffle max arange size ex_segs repeat_interleave index_select index_fill_ index_add_ append tensor range len NLLLoss CopyGeneratorLoss CopyGeneratorLossCompute LabelSmoothingLoss 
NMTLossCompute to len items requires_grad isinstance clone append Tensor split items backward filter_shard_state extend dict zip split next numel log_softmax softmax detach scatter_ exp detach scatter_ items list Optimizer set_parameters named_parameters lr max_grad_norm load_state_dict optim is_tensor cuda values state_dict items Optimizer set_parameters lr_bert max_grad_norm load_state_dict optim is_tensor cuda values state_dict items Optimizer lr_other set_parameters max_grad_norm load_state_dict optim is_tensor cuda values state_dict Translator alpha GNMTGlobalScorer sum accum_count int SummaryWriter generator vocab _tally_parameters model_path print to Trainer report_every info world_size ReportMgr abs_loss SummaryWriter tensorboard strftime report_every ReportMgr tensorboard_log_dir setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO FileHandler OrderedDict strip split category category startswith startswith category ord len range split sum len list view size contiguous range len raw_path glob _prepro pjoin info append max range enumerate len load max collect len preprocess_train preprocess_test info append save BertData exists enumerate open tuple add set range len sum | # RankAE Pytorch implementation of the AAAI-2021 paper: [Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders](https://ojs.aaai.org/index.php/AAAI/article/view/17724/17531). The code is partially referred to https://github.com/nlpyang/PreSumm. ## Requirements * Python 3.6 or higher * torch==1.1.0 * pytorch-transformers==1.1.0 * torchtext==0.4.0 * rouge==0.3.2 * tensorboardX==2.1 | 915 |
RuibinMa/comp755project-ruibinma | ['image retrieval'] | ['Fine-tuning CNN Image Retrieval with No Human Annotation', 'CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples'] | cirtorch/examples/analysis.py cirtorch/networks/imageretrievalnet.py pycolmap/pycolmap/__init__.py pycolmap/tools/transform_model.py cirtorch/examples/generate_positive_colon.py cirtorch/datasets/genericdataset.py pycolmap/tools/save_cameras_as_ply.py cirtorch/examples/train_colon.py pycolmap/tools/write_depthmap_to_ply.py cirtorch/datasets/testdataset.py cirtorch/examples/test.py pycolmap/pycolmap/rotation.py cirtorch/datasets/colon_naive_divide.py cirtorch/examples/cnncolonslam.py pycolmap/pycolmap/cvt_bin_txt.py pycolmap/tools/write_camera_track_to_bundler.py cirtorch/datasets/datahelpers.py pycolmap/tools/delete_images.py cirtorch/layers/loss.py cirtorch/examples/train.py cirtorch/examples/infer.py cirtorch/utils/whiten.py cirtorch/utils/download.py cirtorch/datasets/generate_positive_colon.py cirtorch/layers/functional.py cirtorch/layers/pooling.py pycolmap/pycolmap/scene_manager.py cirtorch/examples/generate_RNN_training.py cirtorch/utils/general.py cirtorch/datasets/traindataset.py cirtorch/__init__.py pycolmap/pycolmap/database.py pycolmap/tools/impute_missing_cameras.py pycolmap/pycolmap/camera.py cirtorch/examples/modify_impath.py colmap_retrieval/test.py cirtorch/utils/evaluate.py cirtorch/layers/normalization.py pycolmap/pycolmap/image.py cirtorch/examples/calculate_mean_std.py split_images imresize collate_tuples default_loader cid2filename accimage_loader flip pil_loader visualize_test get_num_images_of_cluster shuffle is_hard_positive generate process_trainval_cluster process_test_cluster get_clusters visualize ImagesFromList ImagesFromDataList config_qimname configdataset config_imname TuplesDataset compute image_loader Graph Node colon_slam main visualization visualize_test get_num_images_of_cluster shuffle is_hard_positive generate process_trainval_cluster process_test_cluster get_clusters visualize generate_overlap generate_non_overlap generate main visualization main change_path change_dict main validate AverageMeter test save_checkpoint set_batchnorm_eval main train validate AverageMeter test save_checkpoint set_batchnorm_eval main train l2n mac rmac spoc contrastive_loss gem ContrastiveLoss L2N RMAC MAC SPoC GeM extract_ss ImageRetrievalNet extract_vectors init_network extract_ms download_train download_test compute_ap compute_ap_old compute_map compute_map_and_print get_data_root get_root htime sha256_hash pcawhitenlearn whitenlearn whitenapply get_scores run_colmap_retriever Camera opencv_distortion simple_radial_distortion radial_distortion cvt_bin_txt get_image_ids_from_pair_id add_descriptors add_inlier_matches add_matches add_image COLMAPDatabase get_pair_id add_camera blob_to_array main add_keypoints array_to_blob Image DualQuaternion Quaternion rotation_matrix_to_axis_angle axis_angle_to_rotation_matrix cross_prod_matrix SceneManager main main interpolate_hermite interpolate_linear main save_camera_ply main main main join format print sort copyfile rmtree listdir exists makedirs thumbnail ANTIALIAS size view len point3D_ids set join list format items name sort error print SceneManager load_images images is_hard_positive append range keys exists len join name min SceneManager load_images images sample is_hard_positive append keys len load_images SceneManager join format print get_num_images_of_cluster append listdir len print list zip join format size new min 
len rmtree paste save range exists pil_loader makedirs join format size new choice rmtree paste enumerate save exists pil_loader makedirs percentile join int format visualize visualize_test glob sort print len shuffle process_test_cluster process_trainval_cluster range get_clusters enumerate makedirs join lower format len format zeros_like print convert mean shape sqrt array enumerate len sort DataLoader ImagesFromList format determine_location extract_ss Graph print Variable add_observation Node extract_ms numpy cuda enumerate add_node format extract_ss plot print Variable numpy savefig extract_ms empty cuda fit_transform enumerate init_network extract_vectors cuda load_state_dict input load_unobserved_observation format image_loader Compose network_path eval Normalize listdir image_size multiscale load T save_unobserved_observation print meta_repr visualization len format print dot append range enumerate len extract_ss Variable generate_overlap DataLoader extract_ms generate_non_overlap zeros numpy cuda ImagesFromList join size new len paste save sample open descriptors_dir output_dir exists vis_dir generate base_dir n join min rmtree makedirs htime compute_map_and_print gpu_id whitening save_Lw_as whitenapply colon_image_dir parse_args load_Lw download_test whitenlearn get_data_root glob load_url network_offtheshelf startswith download_train time sort dot argsort configdataset numpy split join format print size min len makedirs copyfile new paste open save sample range enumerate split join items split save_score_path splitext validate TuplesDataset SGD pretrained DataLoader save_checkpoint arch seed test_datasets exp Adam epochs append range manual_seed_all val directory test lr resume manual_seed ExponentialLR training_dataset isfile train step update time format criterion model Variable backward print AverageMeter zero_grad apply create_epoch_tuples step cuda range enumerate len update time format criterion model Variable print AverageMeter eval create_epoch_tuples cuda range enumerate len htime compute_map_and_print extract_vectors whitenapply cuda whitenlearn get_data_root format Compose eval Normalize join time T print dot argsort configdataset test_whiten numpy split copyfile join save eval __name__ configure log_value whitening startswith max_pool2d size min tolist floor expand_as Tensor abs max range clamp size pow sqrt permute sum get_data_root children list format basename join print ImageRetrievalNet load_url load_state_dict startswith ReLU append Linear format extract_ss Variable print eval DataLoader extract_ms zeros cuda ImagesFromList enumerate len pow zeros upsample clone join format print len system mkdir range makedirs join format print len system mkdir range makedirs float arange len float arange len compute_ap max arange min zeros float sum array len format compute_map concatenate print around append range len round sha256 dot norm T eig inv mean dot sqrt diag T inv eig mean dot cholesky print zeros format len join basename format print copyfile call rmtree append exists enumerate makedirs square load_cameras save_images load_points3D save_points3D save_cameras SceneManager load_images execute float64 asarray execute shape uint8 ascontiguousarray execute asarray float64 shape get_pair_id execute uint32 execute shape asarray float32 asarray shape get_pair_id execute uint32 database_path rand initialize_tables add_keypoints add_matches exit connect next close execute add_image remove float32 dict add_camera blob_to_array randint norm cross_prod_matrix delete_images 
output_folder map SceneManager get_image_from_name iter save input_folder camera_id format ToQT Image image_to_idx FromQT t xrange append q format Image image_to_idx FromQT t xrange append normalize float q len interpolate_hermite sorted camera_id itervalues interpolate_linear column_stack array linspace len load_images save_camera_ply output_file scale array load_cameras open dense_folder describe image_filename tuple linspace fy width meshgrid cy imread height astype fromarrays cx fx uint8 reshape write dstack output_filename | # Colon Place Recognition Using Convolutional Neural Network This project trains a CNN for place recognition (image retrieval) task in colonoscopic images. All the commands should be excuted under the top-level directory (comp755project-ruibinma/) First, combine the sharded compressed files: ``` # Due to the size limit of github, the uploaded model and data are sharded cat pretrained_model/model.pth.tar.parta* > pretrained_model/model.pth.tar cat data/colonimages.tar.gz.parta* > data/colonimages.tar.gz # Extract colon images cd data | 916 |
RuochenFan/S4Net | ['instance segmentation', 'semantic segmentation'] | ['S4Net: Single Stage Salient-Instance Segmentation'] | lib/dataset_dev/utils/__init__.py lib/setup.py lib/detection_opr/ssd/ssd_bbox_transform.py lib/detection_opr/ssd/data_process/data_iter.py lib/datasets/roi_data_layer/layer.py lib/__init__.py lib/dataset_dev/dataset_factory.py lib/tf_utils/basemodel/__init__.py lib/datasets/factory.py lib/dataset_mx/utils/__init__.py lib/lib_kernel/lib_psroi_pooling/psroi_pooling_op.py experiment/make_saliency_dataset.py lib/detection_opr/rpn/__init__.py lib/detection_opr/rfcn_plus_plus/rfcn_plus_plus_opr.py lib/utils/__init__.py experiment/evaluation.py lib/datasets/lib_coco/PythonAPI/pycocotools/coco.py lib/detection_opr/utils/vis_det.py lib/datasets/roi_data_layer/__init__.py lib/detection_opr/utils/bbox_transform.py lib/dataset_mx/imdb.py lib/dataset_mx/pycocotools/setup_windows.py lib/dataset_mx/mask_transform.py lib/lib_kernel/lib_roi_align/roi_align_op_test.py experiment/train_multi_gpu.py lib/dataset_mx/roi_data_layer/prefetching_iter.py lib/detection_opr/rpn/proposal_top_layer.py lib/lib_kernel/lib_psalign_pooling/__init__.py lib/datasets/lib_coco/PythonAPI/pycocotools/cocoeval.py lib/dataset_mx/pycocotools/setup_linux.py lib/lib_kernel/lib_psroi_pooling/psroi_pooling_op_grad.py lib/lib_kernel/lib_psalign_pooling_ave/__init__.py lib/lib_kernel/lib_roi_align/roi_align_op_grad.py lib/detection_opr/ssd/data_process/rand_sampler.py experiment/anchor_target_layer.py experiment/__init__.py lib/lib_kernel/lib_roi_align/lzm_roi_align_op_test.py lib/lib_kernel/lib_psalign_pooling/psalign_pooling_op_test.py lib/datasets/roi_data_layer/roidb.py lib/dataset_mx/pycocotools/coco.py lib/datasets/combine_filter_roidb.py lib/lib_kernel/lib_roi_pooling/roi_pooling_op.py lib/detection_opr/utils/loss_opr_without_box_weight.py lib/lib_kernel/lib_roi_pooling/roi_pooling_op_grad.py lib/lib_kernel/lib_psalign_pooling/psalign_pooling_op.py lib/dataset_mx/cityscape.py experiment/resnet_v1.py lib/datasets/__init__.py lib/detection_opr/rfcn_plus_plus/__init__.py lib/detection_opr/rpn/proposal_target_layer.py lib/dataset_dev/coco.py lib/datasets/roi_data_layer/minibatch.py lib/datasets/roi_data_layer/prefetching_iter.py lib/dataset_mx/combine_filter_roidb.py lib/tf_utils/model_parallel.py lib/dataset_mx/utils/mask_voc2coco.py lib/lib_kernel/lib_roi_align/roi_align_op.py lib/dataset_dev/data_processing.py lib/datasets/ds_utils.py lib/utils/boxes_grid.py lib/dataset_dev/pascal_voc.py lib/utils/logger.py lib/datasets/lib_coco/PythonAPI/setup.py experiment/resnet_utils.py lib/dataset_mx/pycocotools/mask.py experiment/network_desp.py lib/datasets/imdb.py lib/detection_opr/rpn/anchor_target_layer_without_boxweight.py lib/lib_kernel/__init__.py lib/lib_kernel/lib_roi_pooling/roi_pooling_op_test.py lib/utils/nms.py lib/dataset_mx/coco.py lib/dataset_mx/ds_utils.py lib/datasets/pascal_voc.py lib/lib_kernel/lib_psroi_pooling/__init__.py experiment/config.py lib/tf_utils/basemodel/resnet_utils.py lib/dataset_mx/pascal_voc_eval.py lib/lib_kernel/lib_psroi_pooling/psroi_pooling_op_test.py lib/datasets/voc_eval.py lib/dataset_mx/pycocotools/__init__.py lib/dataset_mx/roi_data_layer/layer.py lib/detection_opr/utils/__init__.py lib/lib_kernel/lib_roi_align/__init__.py lib/tf_utils/__init__.py lib/detection_opr/utils/loss_opr.py lib/utils/timer.py lib/detection_opr/__init__.py lib/lib_kernel/lib_psalign_pooling_ave/psalign_pooling_op_test.py lib/lib_kernel/lib_psalign_pooling_ave/psalign_pooling_op.py 
lib/detection_opr/ssd/ssd_prior.py lib/datasets/tools/mcg_munge.py lib/detection_opr/common_opr/__init__.py lib/detection_opr/ssd/__init__.py lib/dataset_mx/roi_data_layer/__init__.py lib/dataset_mx/pycocotools/cocoeval.py lib/lib_kernel/lib_roi_pooling/__init__.py lib/dataset_mx/pascal_voc.py lib/detection_opr/common_opr/faster_rcnn_opr.py lib/dataset_mx/utils/tictoc.py lib/detection_opr/rpn/snippets.py experiment/test_seg.py lib/dataset_dev/__init__.py lib/dataset_mx/__init__.py lib/dataset_mx/utils/mask_coco2voc.py lib/detection_opr/rpn/anchor_target_layer.py experiment/dataset.py lib/detection_opr/rpn/proposal_layer.py lib/nms/py_cpu_nms.py lib/detection_opr/utils/nms_wrapper.py lib/datasets/lib_coco/PythonAPI/pycocotools/__init__.py lib/lib_kernel/lib_psalign_pooling/psalign_pooling_op_grad.py lib/tf_utils/basemodel/resnet_v1.py lib/datasets/lib_coco/PythonAPI/pycocotools/mask.py lib/dataset_mx/roi_data_layer/minibatch.py lib/lib_kernel/lib_psalign_pooling_ave/psalign_pooling_op_grad.py lib/utils/blob.py lib/tf_utils/debug_opr.py lib/datasets/coco.py lib/datasets/roi_data_layer/layer_prefetch.py lib/detection_opr/rpn/generate_anchors.py lib/detection_opr/ssd/ssd_gengt_layer.py _unmap _compute_targets anchor_target_layer add_path Dataset calc_accu_recall eval calc_iou list_colors get_bbox generate_seg_gt softmax_loss_ohem smooth_l1_loss_valid Network generate_bimasks generate_masks _smooth_l1_loss_base resnet_arg_scope proposal_without_nms_layer global_context_module Block conv2d_same subsample resnet_arg_scope stack_blocks_dense resnet_v1_152 resnet_v1_101 bottleneck resnet_v1_200 resnet_v1_50 resnet_v1 clip_boxes softmax parse_args test_net im_detect snapshot random_flip get_variables_in_checkpoint_file make_data train find_in_path customize_compiler_for_nvcc custom_build_ext locate_cuda coco _filter_crowd_proposals get_training_roidb combined_roidb filter_roidb unique_boxes xywh_to_xyxy validate_boxes xyxy_to_xywh filter_small_boxes get_imdb list_imdbs imdb pascal_voc parse_rec voc_eval voc_ap COCO Params COCOeval encode decode area toBbox RoIDataLayer RoIDataLayer get_minibatch _get_image_blob PrefetchingIter prepare_roidb coco _int64_feature _cat_id_to_real_id _bytes_feature get_imdb list_imdbs flip_gt_boxes resize_gt_boxes flip_image smallest_size_at_least sub_mean_pixel pascal_voc coco_results_one_category_kernel generate_cache_seg_inst_kernel coco combined_roidb load_gt_test_imdb filter_roidb merge_roidb load_gt_roidb load_gt_sdsdb combined_sdsdb unique_boxes filter_small_boxes IMDB get_flipped_entry_outclass_wrapper get_gt_masks intersect_box_mask mask_overlap PascalVOC voc_eval_sds voc_ap parse_inst parse_voc_rec voc_eval check_voc_sds_cache decode RoIDataLayer get_minibatch _get_image_blob PrefetchingIter decodeMask mask_coco2voc segToMask mask_voc2coco encodeMask toc tic reshape_layer crop_pool_layer row_column_max_pooling roifm_maxk_mask_layer roifm_maxk_mask global_context_module _unmap _compute_targets anchor_target_layer _unmap _compute_targets anchor_target_layer generate_anchors _scale_enum _whctrs _ratio_enum _mkanchors _filter_boxes proposal_layer proposal_without_nms_layer _get_bbox_regression_labels _compute_targets proposal_target_layer _sample_rois proposal_top_layer generate_anchors_pre clip_boxes bbox_transform bbox_transform_inv ssd_gengt_layer make_anchors_func DataIter RandCropper RandSampler RandPadder clip_boxes bbox_transform bbox_transform_inv softmax_loss_ohem _get_mask_of_label smooth_l1_loss_valid debug_two smooth_l1_loss smooth_l1_loss_ohem 
debug_four sum_ohem_loss _smooth_l1_loss_base debug_single softmax_layer softmax_loss_ohem smooth_l1_loss_rcnn smooth_l1_loss_ohem smooth_l1_loss_rpn sum_ohem_loss _smooth_l1_loss_base nms soft_nms visualize_detection visualize_detection_old _psalign_pool_grad _psalign_pool_grad _psroi_pool_grad conv2d weight_variable _roi_align_grad conv2d weight_variable _roi_pool_grad conv2d weight_variable py_cpu_nms _debug_four _debug_single _debug_three _debug_two average_gradients sum_gradients Block conv2d_same subsample resnet_arg_scope stack_blocks_dense resnet_v1_152 resnet_v1_101 bottleneck resnet_v1_200 resnet_v1_50 resnet_v1 resnet_v1_block im_list_to_blob prep_im_for_blob get_boxes_grid QuickLogger nms Timer sum RPN_POSITIVE_WEIGHT ones reshape _compute_targets RPN_BBOX_INSIDE_WEIGHTS ascontiguousarray maximum empty array fill zeros argmax bbox_overlaps fill empty insert sum range len list sort calc_iou astype copy calc_accu_recall int32 append zeros auc keys range enumerate len append all range sum range len conv2d random_normal_initializer gather reduce_sum top_k sparse_softmax_cross_entropy_with_logits to_float pow stop_gradient less abs reduce_sum _smooth_l1_loss_base astype ascontiguousarray round int32 zeros bool argmax bbox_overlaps range len minimum clip_boxes reshape bbox_transform_inv flatten RPN_PRE_NMS_TOP_N array zeros range int copy zeros range int copy pad add_argument ArgumentParser minimum maximum exp toc int show max clip_boxes reshape astype float32 shape stack tic softmax resize zeros range array image_size run imwrite Saver resize forward Session seed show restore dtype placeholder get_test_collection tic append inference input get_logger Network im_detect range format average_time astype eval info ConfigProto toc join uint8 deepcopy RNG_SEED reshape float32 rectangle zeros Dataset weights_addr len NewCheckpointReader get_variable_to_shape_map join get_state format print save uint8 concatenate astype float32 stack resize zeros image_size range image_size copy Dataset Network pathsep pjoin exists split find_in_path items pjoin pathsep dirname sep append _compile compiler_so iou toarray csr_matrix xyxy_to_xywh enumerate print format len print append_flipped_images USE_FLIPPED set_proposal_method PROPOSAL_METHOD get_training_roidb get_imdb extend imdb classes append split dot array unique int parse findall text append find arange concatenate size maximum sum max range parse_rec cumsum argmax max sum range eps format astype mkdir float enumerate minimum join print sort maximum voc_ap argsort zeros bool array len shape USE_ALL_GT _get_image_blob randint empty array len prep_im_for_blob MAX_SIZE astype float32 PIXEL_MEANS append im_list_to_blob imread range len image_index toarray roidb argmax max range image_path_at len to_float convert_to_tensor to_float minimum to_int32 maximum greater cond range split print clip_boxes astype extend shape mask_voc2coco float enumerate embed len join mask_coco2voc append_flipped_images gt_roidb append_flipped_images gt_sdsdb dataset image_set_test DATA_DIR dataset_path extend merge_roidb filter_roidb merge_roidb filter_roidb zeros astype range resize zeros min max sum min max int parse findall text dict append find parse_voc_rec iteritems eps mask_overlap cumsum astype float32 maximum voc_ap argsort xrange resize append zeros float check_voc_sds_cache enumerate len join reshape min delete where unique xrange append zeros max open join format print len parse_inst mkdir append enumerate astype get_gt_masks polygon zeros clip len zeros range 
len zeros decodeMask enumerate segToMask flatten shape logical_xor append len encodeMask_c astype float32 resize zeros range len time time shape avg_pool2d max_pool2d transpose shape range zeros _unmap RPN_FG_FRACTION transpose RPN_BATCHSIZE choice int RPN_CLOBBER_POSITIVES astype float32 max vstack _ratio_enum array hstack sqrt _whctrs round _mkanchors _whctrs _mkanchors decode nms RPN_POST_NMS_TOP_N clip_boxes reshape _filter_boxes hstack bbox_transform_inv RPN_NMS_THRESH RPN_PRE_NMS_TOP_N zeros RPN_MIN_SIZE decode _filter_boxes RPN_MIN_SIZE FG_FRACTION ones USE_GT _sample_rois hstack reshape astype float32 vstack zeros round zeros BBOX_INSIDE_WEIGHTS int shape BBOX_NORMALIZE_STDS bbox_transform BBOX_NORMALIZE_MEANS BBOX_NORMALIZE_TARGETS_PRECOMPUTED array _get_bbox_regression_labels size min _compute_targets ascontiguousarray choice bbox_overlaps argmax max append RPN_TOP_N clip_boxes reshape hstack bbox_transform_inv choice zeros generate_anchors arange reshape transpose astype float32 int32 meshgrid transpose log transpose exp where neg_overlap neg_pos_ratio bbox_overlaps argmax max exp overlap_threshold append sum do_neg_mining range bbox_transform astype ascontiguousarray fill empty image_size int float32 array len clip sizes sqrt ratios append feat_shapes empty array range enumerate len dtype astype shape zeros shape softmax reshape reduce_mean reduce_sum _smooth_l1_loss_base gather reduce_sum top_k _smooth_l1_loss_base minimum to_float reduce_sum sparse_softmax_cross_entropy_with_logits top_k _smooth_l1_loss_base stop_gradient gather embed embed embed to_float equal _get_mask_of_label maximum to_float greater reduce_sum where maximum _smooth_l1_loss_base stop_gradient gather equal to_float one_hot reshape greater reduce_sum _smooth_l1_loss_base stop_gradient one_hot reshape greater float32 uint8 cpu_soft_nms ascontiguousarray show int str format text astype add_patch dict imshow Rectangle int str format putText astype FONT_HERSHEY_SIMPLEX waitKey dict imshow rectangle array psalign_pool_grad get_attr psroi_pool_grad get_attr truncated_normal get_attr roi_align_grad get_attr roi_pool_grad append maximum minimum print shape embed embed embed embed concat reduce_mean zip append expand_dims concat reduce_sum zip append expand_dims zeros max range len min astype float32 shape resize float max arange reshape transpose hstack SPATIAL_SCALE dstack ASPECTS sqrt repeat floor KERNEL_SIZE meshgrid zeros SCALES max range len append maximum minimum | ## Dataset Preparation You can download the dataset in pickle format from https://drive.google.com/open?id=1-Yn_9GMjeu-d8gLZ26t3bvH6yX_BMFfm. Or run make_saliency_dataset.py to prepare the dataset by yourself. The pickle file should be placed in ./data directory. ## Test Our pretrained weights can be found in https://drive.google.com/open?id=1TeJw415uNGwmiOT1v5iLIxcGNFWGriW4, you can unzip it and place it into ./logs. Simply run: ``` cd experiment python3 test_seg.py ``` ## Train | 917 |
RuojinCai/ShapeGF | ['point cloud generation'] | ['Learning Gradient Fields for Shape Generation'] | evaluation/pytorch_structural_losses/match_cost.py models/generators/mlp_gen.py trainers/lgan_trainer_3D.py models/discriminators/mlp_dis.py datasets/single_shape_datasets.py models/decoders/resnet_add.py evaluation/pytorch_structural_losses/nn_distance.py evaluation/pytorch_structural_losses/__init__.py models/encoders/constant_encoder.py train.py evaluation/pytorch_structural_losses/setup.py datasets/pointflow_datasets.py models/decoders/resnet_cbn.py trainers/base_trainer.py train_multi_gpus.py test.py evaluation/evaluation_metrics.py trainers/utils/vis_utils.py models/encoders/l3dp_encoder.py trainers/ae_trainer_3D.py trainers/utils/utils.py trainers/utils/gan_losses.py main_worker get_args get_args main_worker main init_np_seed reduce_tensor ShapeNet15kPointClouds get_data_loaders get_datasets Uniform15KPC init_np_seed get_data_loaders as_mesh get_datasets SingleShape init_np_seed _pairwise_EMD_CD_ jensen_shannon_divergence _jsdiv EMD_CD lgan_mmd_cov knn compute_all_metrics jsd_between_point_cloud_sets distChamferCUDA unit_cube_grid_point_cloud entropy_of_occupancy_grid emd_approx distChamfer MatchCostFunction NNDistanceFunction ResnetBlockConv1d Decoder CResnetBlockConv1d Decoder CBatchNorm1d Discriminator Encoder Encoder Generator truncated_normal score_matching_loss Trainer BaseTrainer Trainer dis_loss gen_loss gradient_penalty dis_acc ground_truth_reconstruct ground_truth_reconstruct_multi set_random_seed get_opt ground_truth_field get_prior visualize_point_clouds_3d_scan visualize_point_clouds_3d visualize_field get_grid visualize_procedure visualize_point_clouds_2d_overlay config log_dir dict2namespace add_argument strftime copy2 ArgumentParser parse_args makedirs data validate print multi_gpu_wrapper get_data_loaders Trainer pretrained distributed import_module resume pprint save_dir type initial_seed seed all_reduce clone get_world_size int SummaryWriter format init_process_group gpu int get_args spawn print sync_bn device_count getattr ShapeNet15kPointClouds DataLoader get_datasets Scene tuple isinstance concatenate val train SingleShape float match_cost bmm size transpose expand_as long min mean distChamferCUDA emd_approx append range distChamfer cat view size min contiguous expand tqdm distChamfer distChamferCUDA append emd_approx range cat update topk float size sqrt index_select to range cat to size min mean float update _pairwise_EMD_CD_ print lgan_mmd_cov knn t ndarray reshape float32 float range reshape squeeze fit float warn unit_cube_grid_point_cloud unique zeros kneighbors len _jsdiv sum entropy warn sum squeeze add_ copy_ shape normal_ mean score_net randn_like view size float sum mean size float sum mean reshape size int ExponentialLR StepLR LambdaLR Adam SGD lr getattr float seed manual_seed_all manual_seed norm exp view size sum max set_title transpose set_xlim add_subplot draw close scatter set_zlim figure _renderer zip array enumerate set_ylim len set_title transpose set_xlim add_subplot draw close scatter set_zlim figure _renderer zip array enumerate set_ylim len arange to size expand meshgrid float set_title transpose draw add_subplot close scatter savefig figure _renderer array enumerate len set_clim add_subplot set_ticks max set_aspect transpose title scatter contourf quiver sum range close ScalarMappable sqrt set_array int print reshape min draw figure _renderer numpy array append range visualize_point_clouds_3d concatenate | # Learning Gradient Fields for 
Shape Generation This repository contains a PyTorch implementation of the paper: [*Learning Gradient Fields for Shape Generation*](http://www.cs.cornell.edu/~ruojin/ShapeGF/) [[Project page]](http://www.cs.cornell.edu/~ruojin/ShapeGF/) [[Arxiv]](https://arxiv.org/abs/2008.06520) [[Short-video]](https://www.youtube.com/watch?v=HQTbtFzDYAU) [[Long-video]](https://www.youtube.com/watch?v=xCCdnzt7NPA) [Ruojin Cai*](http://www.cs.cornell.edu/~ruojin/), [Guandao Yang*](https://www.guandaoyang.com/), [Hadar Averbuch-Elor](http://www.cs.cornell.edu/~hadarelor/), | 918 |
RussellTsuchida/ELU_GELU_kernels | ['gaussian processes'] | ['Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks'] | code/experiments/02_shallow_experiments/gp_mlp.py code/experiments/empirical_kernels_gelu.py code/experiments/02_shallow_experiments/benchmark_plotter.py code/kernels/nn_kernel_lrelu.py code/experiments/02_shallow_experiments/toy_example.py code/mvn_mixture/diag_mvn_mixture.py code/experiments/data/datasets.py code/kernels/nn_kernel_abs.py code/experiments/02_shallow_experiments/utils.py code/networks/mlp_sample_animator.py code/experiments/data/other_datasets.py code/kernels/nn_kernel_trig.py code/experiments/02_shallow_experiments/module_gp_gelu.py code/experiments/03_deep_experiments/grid_iteration.py code/experiments/02_shallow_experiments/DataGen.py code/kernels/nn_kernel_linear.py code/kernels/nn_kernel_elu.py code/networks/np_mlp.py code/experiments/experiment_array.py code/experiments/activation_plotter.py code/experiments/plotters.py code/experiments/03_deep_experiments/plot_performance.py code/kernels/nn_kernel.py code/networks/np_initialisers.py code/dynamics/dynamics.py code/experiments/02_shallow_experiments/benchmark_experiment.py code/experiments/empirical_kernels_elu.py code/experiments/03_deep_experiments/experiment_array.py code/kernels/nn_kernel_relu.py code/kernels/nn_kernel_gelu.py code/kernels/nn_kernel_erf.py sigma_star lambda_3_elu sigma_star_plot norm_preserving_s lambda_3_gelu ExperimentArray plot_posterior_samples plot_gp_errors plot_samples plot_proposal plot_2d_samples plot_gp plot_sequence plot_likelihood plot_legend_only DataGenerator gp_model gp_model compare_dist metrics_calc gauss_neg_log_like print_w_time get_time try_plot plot_2d_grid plot_1d_grid ExperimentArray plot_and_return_rmse Concrete Boston Yacht Wine Energy Naval Kin8nm Protein Power Dataset load_or_generate_1d_regression load_or_generate_smooth_xor load_or_generate_snelson load_or_generate_x_xdash load_or_generate_xor load_or_generate_hypersphere NNKernel NNKernelAbs NNKernelElu NNKernelErf NNKernelGelu NNKernelLinear NNKernelLRelu NNKernelRelu NNKernelTrig DiagMVNMixture MlpGPAnimator GeneralisedNormal RceInitialiser create_initialiser StudentT Uniform Laplace Mlp number zeros_like plot xlabel sigma_star ylabel savefig figure linspace legend gca tick_params enumerate brentq sigma_star cos pi sqrt sin arcsin exp multivariate_normal cos pi cdf list plot figlegend viridis savefig Normalize figure legend gca range len set_aspect reshape axis pcolormesh gca savefig unique meshgrid plot xlabel set_yticklabels ylabel FormatStrFormatter viridis savefig Normalize figure set_major_formatter gca fill_between legend range tick_params number plot xlabel ylabel savefig figure legend gca tick_params xticks set_ylim yticks set_aspect xlabel tripcolor close ylabel flatten imshow ylim xticks savefig figure xlim abs yticks close savefig plot quantiles int plot axis DiagMVNMixture close mean flatten savefig figure fill_between range plot xlabel close ylabel savefig figure legend xticks range set_ylim yticks set_aspect get_proposal close imshow savefig abs print reshape str gauss_neg_log_like print name_ square mean sqrt square log name_ plot_1d_grid plot_2d_grid show T plot concatenate set_yticklabels set_xticklabels set_yticks set_xlim add_subplot set_xticks savefig figure fill set_ylim show set_xlabel set_xlim add_subplot view_init scatter plot_trisurf figure set_ylabel set_zlabel set_zlim set_ylim enumerate nanmin all arange print plot_samples where load_array zeros 
range plot_legend_only load load load save load save genfromtxt load reshape argsort save load save | # ELU_GELU_kernels This code accompanies the paper 'Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks' A link to a permanent repository will be added to the paper upon acceptance. ## Installing dependencies We recommend you use a virtual environment. Inside your environment, type `pip install --user --requirement requirements.txt` You might need to compile a piece of Fortran code into a Python module. If you receive errors when trying to run the commands below, follow the directions in `code/kernels/fortran_module/instructions.txt` | 919 |
RyanWu2233/Style_Transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | Gatys_Style_Transfer.py show_result deprocess_image gram_matrix transfer_net main content_loss get_vgg total_variation_loss transfer_step preprocess_image style_loss expand_dims preprocess_input img_to_array load_img shape astype reshape dot transpose batch_flatten permute_dimensions gram_matrix square VGG19 Input reshape constant_initializer show deprocess_image imshow R_model numpy trainable_variables apply_gradients gradient zip time format print save_img log10 transfer_step show_result range | ## Neural Style Transfer - using TensorFlow 2.1   TensorFlow 2.1 implementation for Leon A.Gatys neural style transfer algorithm  ---- ## Neural style transfer algorithm: Gaty's neural style transfer algorithm use VGG19 to extract feature space for both content image and style image. Content loss is computed by estimating mean squared error at content layer (`conv5_1`) between content and style image. Style loss is computed by computing Gram matrix at style layers (`conv5_1`,`conv4_1`,`conv3_1`,`conv2_1`). Total loss combined style loss and content loss with specific content weighting and style weighting. Style can be transfered by minimizing total loss.  | 920 |
SAIL-GuoLab/Cell_Segmentation_and_Tracking | ['cell segmentation'] | ['Segmentation with Residual Attention U-Net and an Edge-Enhancement Approach Preserves Cell Shape Features'] | Cell_segmentation/Prediction_only/Visualization.py Cell_segmentation/Train_n_Testing/deep_learning_model/.ipynb_checkpoints/solver-Copy1-checkpoint.py Cell_segmentation/Prediction_only/deep_learning_model/network.py Cell_segmentation/Train_n_Testing/Image_preprocessing.py Cell_segmentation/Prediction_only/deep_learning_model/.ipynb_checkpoints/solver-checkpoint.py Cell_segmentation/Train_n_Testing/Visualization.py Cell_segmentation/Prediction_only/deep_learning_model/data_loader.py Cell_segmentation/Train_n_Testing/label_preprocessing.py Cell_segmentation/Prediction_only/deep_learning_model/.ipynb_checkpoints/data_loader-checkpoint.py Cell_segmentation/Prediction_only/deep_learning_model/.ipynb_checkpoints/solver-Copy1-checkpoint.py Cell_segmentation/Train_n_Testing/deep_learning_model/.ipynb_checkpoints/misc-checkpoint.py Cell_segmentation/Train_n_Testing/deep_learning_model/.ipynb_checkpoints/data_loader-checkpoint.py Cell_segmentation/Prediction_only/misc_functions.py Cell_segmentation/Train_n_Testing/deep_learning_model/.ipynb_checkpoints/network-checkpoint.py Cell_segmentation/Prediction_only/Image_preprocessing.py Cell_segmentation/Train_n_Testing/deep_learning_model/solver.py Cell_segmentation/Prediction_only/deep_learning_model/evaluation.py Cell_segmentation/Prediction_only/deep_learning_model/.ipynb_checkpoints/misc-checkpoint.py Cell_segmentation/Train_n_Testing/deep_learning_model/misc.py Cell_segmentation/Prediction_only/deep_learning_model/misc.py Cell_segmentation/Prediction_only/deep_learning_model/.ipynb_checkpoints/evaluation-checkpoint.py Cell_segmentation/Train_n_Testing/deep_learning_model/network.py Cell_segmentation/Train_n_Testing/Convert_images_to_multiframe_nifti.py Cell_segmentation/Train_n_Testing/deep_learning_model/.ipynb_checkpoints/evaluation-checkpoint.py Cell_segmentation/Train_n_Testing/deep_learning_model/.ipynb_checkpoints/solver-checkpoint.py Cell_segmentation/Train_n_Testing/deep_learning_model/evaluation.py Cell_segmentation/Train_n_Testing/misc_functions.py Cell_segmentation/Prediction_only/deep_learning_model/.ipynb_checkpoints/network-checkpoint.py Cell_segmentation/Prediction_only/deep_learning_model/solver.py Cell_segmentation/Train_n_Testing/deep_learning_model/data_loader.py find_sub_list printProgressBar plot_img plot_img_and_hist rescaling config ImageFolder get_loader get_MSE CrossEntropy printProgressBar conv_block resconv_block U_Net up_conv init_weights ResAttU_Net Attention_block Solver ImageFolder get_loader get_MSE CrossEntropy printProgressBar conv_block resconv_block AttU_Net R2AttU_Net Recurrent_block U_Net up_conv init_weights R2U_Net single_conv ResAttU_Net RRCNN_block Attention_block Solver recoverPatches config ImageFolder get_loader get_MSE CrossEntropy printProgressBar conv_block resconv_block AttU_Net R2AttU_Net Recurrent_block U_Net up_conv init_weights R2U_Net single_conv ResAttU_Net RRCNN_block Attention_block Solver ImageFolder get_loader get_MSE CrossEntropy printProgressBar conv_block resconv_block AttU_Net R2AttU_Net Recurrent_block U_Net up_conv init_weights R2U_Net single_conv ResAttU_Net RRCNN_block Attention_block Solver str index append range len cumulative_distribution rescale_intensity type plot cumulative_distribution set_axis_off img_as_float set_xlabel set_xlim set_yticks imshow hist twinx ticklabel_format set_xticks ravel subplot 
axis imshow title imread range print int float format int int ImageFolder DataLoader sum len zeros shape print apply batch_size zeros imshow axis range print float len | # Cell Segmentation and Tracking *By Nanyan "Rosalie" Zhu and Chen "Raphael" Liu, Columbia University, 2019* #### Please cite [our paper](https://arxiv.org/pdf/2001.05548.pdf) if you find this useful. ## Overview The purpose of this repository is to design and implement a tool to quantify the cellular behaviors at the single-cell level. We decided to approach this challenge by dissecting it into two stages: cell segmentation and cell tracking. The quantification of cellular properties/behaviors is a natural outcome as the cell tracking stage is complete. ## Repository Hierarchy ``` Cell_Segmentation_and_Tracking ├── Cell_Segmentation │ ├── Prediction_only | 921 |
SBU-BMI/quip_cancer_segmentation | ['breast cancer detection'] | ['Utilizing Automated Breast Cancer Detection to Identify Spatial Distributions of Tumor Infiltrating Lymphocytes in Invasive Breast Cancer'] | patch_extraction_cancer_40X/save_svs_to_tiles.py training/train_cancer_cnn_Resnet_pretrained.py prediction/common/shape.py training/tumor_utils.py prediction/common/ch_inner_prod.py training/resnet.py prediction/tumor_pred/tumor_utils.py prediction/tumor_pred/pred.py heatmap_gen/gen_json_multipleheat.py prediction/common/__init__.py prediction/color/color_stats.py prediction/common/batch_norms.py derive_info_from_pred_file extract_patch whiteness redness blackness split_validation load_data main batch_nmsp SoftThresPerc ResponseNormal BatchNormSparseLayer batch_norm BatchNormLayer BNRectifyThres BNRectify BNLeakyRectify BNRectifyPerc ChInnerProdMerge ChInnerProd CenterCrop FlattenLayer PadLayer ReshapeLayer DimshuffleLayer val_fn_epoch_on_disk iterate_minibatches softmax_np whiteness confusion_matrix mean_std parallelize_model load_data unparallelize_model auc_roc from_output_to_pred get_mean_and_std baseline_vgg16 MyResNet data_loader load_data_folder data_loader_multi_res shuffle_data parallelize_model load_data_split baseline_inception_v3 unparallelize_model cvt_to_gpu get_sizes FCNN net_frozen baseline_resnet load_imgs_files conv1x1 ResNet ResNet_4 resnet50 Bottleneck resnet34_4 resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 train_model val_fn_epoch confusion_matrix mean_std main auc_roc data_loader_noisy data_loader_visualize get_mean_and_std get_mean_and_std_batch data_loader load_data_folder parallelize_model shuffle_data load_data_split unparallelize_model cvt_to_gpu HASHI net_frozen weights_normal_init load_imgs_files loadtxt sort unique range len format ANTIALIAS save resize array read_region flatten std mean mean convert whiteness redness blackness int32 zeros float listdir array range format write close load_data range open split_validation BatchNormLayer NonlinearityLayer getattr identity NonlinearityLayer getattr identity BatchNormSparseLayer sum exp max slice range len fromarray data_aug print astype astype int32 copy time format print reshape numpy load_data zeros listdir len sum roc_curve is_available to DataParallel module print array append load_data_folder append permutation len format print shuffle_data load_data_split len print DataLoader div_ zeros range len cuda print Adam SGD frozen_until load_url ResNet load_state_dict load_url ResNet load_state_dict ResNet_4 load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict hamming_loss f1_score accuracy_score empty data model batch_size zero_grad SGD net_type save dataset max str val_fn_epoch strftime color to range format eval lr mkdir item auc_roc flush enumerate deepcopy time criterion backward print Variable confusion_matrix parameters train step len train_model basename setrecursionlimit print in_features parameters DataParallel resnet34 to Linear join format print mean DataLoader append numpy std enumerate flush len fill_ isinstance Conv2d normal_ modules Linear | # quip_cancer_segmentation This repo is for training and testing brca cancer detection pipeline using 3 standard CNNs: VGG16, Resnet-34, and Inception-v4. 
More details are found in the paper: [Utilizing Automated Breast Cancer Detection to Identify Spatial Distributions of Tumor Infiltrating Lymphocytes in Invasive Breast Cancer](https://arxiv.org/abs/1905.10841) NOTE: download the trained models [here](https://stonybrookmedicine.box.com/shared/static/1hdfb06lgd08xfbpoly9tjp6c6i665nz.zip), extract the model files to data/models_cnn The default settings are for Resnet-34 since it performs the best on the public testset. To use other models, change the variable "MODEL" in conf/variables.sh to other models name downloaded from google drive above. # Dependencies - [Pytorch 0.4.0](http://pytorch.org/) - Torchvision 0.2.0 - cv2 (3.4.1) - [Openslide 1.1.1](https://openslide.org/api/python/) | 922 |
SINGROUP/dscribe | ['formation energy'] | ['DScribe: Library of Descriptors for Machine Learning in Materials Science'] | examples/sparse_output.py dscribe/__init__.py examples/acsf.py examples/ewaldsummatrix.py docs/src/conf.py regtests/test_ewaldsummatrix.py examples/lmbtr.py dscribe/descriptors/acsf.py dscribe/core/system.py regtests/derivatives.py examples/sinematrix.py dscribe/descriptors/valleoganov.py docs/src/assets/soap_weighting_plot.py dscribe/kernels/__init__.py regtests/testrunner.py dscribe/utils/stats.py examples/kernels/averagekernel.py regtests/testbaseclass.py regtests/lmbtr.py dscribe/utils/dimensionality.py examples/clustering/training.py dscribe/descriptors/__init__.py examples/aseatoms.py dscribe/kernels/localsimilaritykernel.py dscribe/descriptors/soap.py dscribe/descriptors/descriptor.py examples/forces_and_energies/training_pytorch.py dscribe/kernels/rematchkernel.py examples/readme.py regtests/mbtr.py perftests/perftests.py dscribe/core/lattice.py dscribe/descriptors/elementaldistribution.py examples/clustering/dataset.py regtests/runexamples.py examples/forces_and_energies/training_tensorflow.py regtests/test_sinematrix.py regtests/conftest.py examples/valleoganov.py examples/basics.py regtests/test_coulombmatrix.py dscribe/kernels/averagekernel.py regtests/kernels.py regtests/generaltests.py examples/forces_and_energies/analysis.py dscribe/descriptors/coulombmatrix.py dscribe/descriptors/lmbtr.py examples/forces_and_energies/dataset.py dscribe/core/__init__.py regtests/acsf.py examples/coulombmatrix.py dscribe/utils/species.py examples/mbtr.py regtests/elementaldistribution.py dscribe/descriptors/sinematrix.py dscribe/descriptors/matrixdescriptor.py applyheader.py setup.py dscribe/descriptors/ewaldsummatrix.py dscribe/descriptors/mbtr.py dscribe/utils/geometry.py examples/kernels/rematchkernel.py regtests/soap.py examples/soap.py regtests/testutils.py regtests/test_valle_oganov.py dscribe/utils/__init__.py match_func crawl using_clang get_pybind_include on_config_inited setup Lattice System ACSF CoulombMatrix Descriptor ElementalDistribution EwaldSumMatrix LMBTR MatrixDescriptor MBTR SineMatrix SOAP ValleOganov AverageKernel LocalSimilarityKernel REMatchKernel is2d is1d get_extended_system get_adjacency_list get_adjacency_matrix symbols_to_numbers get_atomic_numbers system_stats energy_force_loss FFNet train_model make_model calc_gradients force_energy_loss calc_val_loss step soap_gto_vs_polynomial soap_derivatives soap_sparse_vs_dense soap_cartesian_vs_imaginary ACSFTests cutoff assert_sparse assert_derivatives_include big_system assert_symmetry_permutation assert_derivatives_numerical assert_symmetry_rotation assert_symmetry_translation bulk_system H2O assert_matrix_descriptor_sorted assert_no_system_modification assert_matrix_descriptor_random assert_matrix_descriptor_eigenspectrum assert_matrix_descriptor_flatten assert_derivatives_exclude assert_symmetries assert_derivatives assert_parallellization water assert_matrix_descriptor_exceptions SoapDerivativeTests SoapDerivativeComparisonTests ElementalDistributionTests SpeciesTests DescriptorTests GeometryTests DistanceTests GaussianTests ASETests REMatchKernelTests AverageKernelTests LMBTRTests MBTRTests ExampleTests SoapTests TestBaseClass get_soap_gto_lmax_setup integral get_soap_default_setup save_poly_coefficients load_ewald soap_integration load_gto_coefficients save_gto_coefficients get_ewald_sum_matrix_default_setup coefficients_gto calculate_ewald get_ewald_sum_matrix_automatic_setup 
get_soap_polynomial_lmax_setup coefficients_polynomial save_ewald get_weights load_polynomial_coefficients test_parallellization test_number_of_features test_matrix_descriptor_eigenspectrum test_no_system_modification test_symmetries test_performance coulomb_matrix test_matrix_descriptor_exceptions test_matrix_descriptor_sorted cm_python test_derivatives test_periodicity test_features test_matrix_descriptor_flatten test_matrix_descriptor_random test_sparse test_parallellization test_number_of_features test_matrix_descriptor_eigenspectrum test_no_system_modification test_unit_cells test_create test_symmetries test_a_independence test_matrix_descriptor_exceptions test_matrix_descriptor_sorted ewald_sum_matrix test_matrix_descriptor_flatten test_matrix_descriptor_random test_electrostatics test_sparse test_parallellization test_number_of_features test_matrix_descriptor_eigenspectrum test_no_system_modification test_unit_cells test_symmetries sine_matrix test_matrix_descriptor_exceptions test_matrix_descriptor_sorted test_features test_matrix_descriptor_flatten test_matrix_descriptor_random test_sparse test_parallellization test_number_of_features test_no_system_modification valle_oganov test_symmetries test_exceptions test_flatten test_vs_mbtr test_sparse join walk match_func new_compiler customize_compiler format getoutput add_js_file connect add_css_file version cKDTree sparse_distance_matrix row tocoo col zip append arange vstack cdist get_positions ceil append get_cell range asarray product concatenate get_pbc tile Atoms norm cross dot get_scaled_positions numbers any get_atomic_numbers array len get append sorted list all hasattr isinstance set symbols_to_numbers any array triu_indices isinstance min get_distance_matrix get_pbc set get_chemical_symbols any get_atomic_numbers union from_atoms len mean Dense Sequential add reduce_mean gradient calc_gradients format print step shuffle calc_val_loss save numpy range len show time format create set_title plot set_xlabel add_subplot copy tqdm SOAP set_ylabel figure legend append range len show time format set_title plot set_xlabel add_subplot copy derivatives tqdm SOAP set_ylabel figure legend append range len show time format set_title plot _cartesian set_xlabel add_subplot copy tqdm SOAP set_ylabel figure legend append create_single range len show time format set_title plot set_xlabel add_subplot copy derivatives tqdm SOAP set_ylabel figure legend append range len assert_symmetry_rotation assert_symmetry_translation assert_symmetry_permutation create copy rotate descriptor_func water abs max create copy translate descriptor_func water abs max create descriptor_func water abs max assert_derivatives_numerical assert_derivatives_include assert_derivatives_exclude derivatives molecule descriptor_func derivatives molecule descriptor_func create set_positions get_number_of_features copy descriptor_func big_system get_positions derivatives zeros range len create water create get_number_of_features todense array check_modifications water bulk water create water create water norm float create water create norm sqrt cdf Atoms range Atoms RandomState arange astype meshgrid Atoms RandomState arange astype meshgrid Atoms water water get func ones get sorted list create reshape get_positions set _alphas SOAP _betas get_atomic_numbers get_chemical_symbols len get sorted list inv get_positions set sqrtm zeros get_atomic_numbers range get_chemical_symbols len get sorted list get_positions set flatten zip append zeros get_atomic_numbers range 
get_chemical_symbols enumerate len get tplquad pi get_soap_gto_lmax_setup coefficients_gto format save format coefficients_polynomial get_soap_polynomial save EwaldSummation Atoms get_positions total_energy add_oxidation_state_by_site Structure zeros get_atomic_numbers range len calculate_ewald format save normal norm fill_diagonal RandomState eigvalsh get_positions absolute argsort flatten zeros get_atomic_numbers len assert_matrix_descriptor_exceptions assert_matrix_descriptor_flatten assert_matrix_descriptor_sorted assert_matrix_descriptor_eigenspectrum assert_matrix_descriptor_random assert_parallellization assert_no_system_modification assert_sparse coulomb_matrix assert_symmetries assert_derivatives coulomb_matrix CoulombMatrix create get_number_of_features CoulombMatrix cm_python norm create CoulombMatrix get_positions create time big_system cm_python range len ewald_sum_matrix EwaldSumMatrix get_number_of_features EwaldSumMatrix create water EwaldSumMatrix create water enumerate create epsilon_0 setup load_ewald EwaldSumMatrix e pi shape zeros range len create EwaldSumMatrix set_pbc water set_cell sine_matrix SineMatrix Atoms SineMatrix get_atomic_numbers diag enumerate SineMatrix valle_oganov ValleOganov len create get_number_of_features set ValleOganov water get_atomic_numbers len create ValleOganov water MBTR | <img src="https://raw.githubusercontent.com/SINGROUP/dscribe/master/logo/dscribe_logo.png" width="400"> [](https://dev.azure.com/laurihimanen/DScribe%20CI/_build/latest?definitionId=1&branchName=master) [](https://coveralls.io/github/SINGROUP/dscribe?branch=master) [](https://github.com/psf/black) DScribe is a Python package for transforming atomic structures into fixed-size numerical fingerprints. These fingerprints are often called "descriptors" and they can be used in various tasks, including machine learning, visualization, similarity analysis, etc. # Homepage For more details and tutorials, visit the homepage at: | 923 |
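To make the DScribe row above concrete, here is a minimal, hedged usage sketch: it builds a SOAP fingerprint for a single water molecule with ASE. The keyword spellings (`rcut`/`nmax`/`lmax` versus the newer `r_cut`/`n_max`/`l_max`) vary between DScribe releases, so they are assumptions rather than guarantees.

```python
# Minimal sketch: a fixed-size SOAP fingerprint for one molecule.
# Keyword spellings depend on the installed DScribe version.
from ase.build import molecule
from dscribe.descriptors import SOAP

atoms = molecule("H2O")
soap = SOAP(species=["H", "O"], rcut=5.0, nmax=8, lmax=6, periodic=False)
features = soap.create(atoms)   # numpy array; one row per atom by default
print(features.shape)
```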
SINTEFMedtek/FAST-Pathology | ['whole slide images', 'semantic segmentation'] | ['Hybrid guiding: A multi-resolution refinement approach for semantic segmentation of gigapixel histopathological images'] | studies/tumour-study/print_node_names.py studies/tumour-study/test_read_png_result.py studies/inference-study/statistical_analysis.py studies/tumour-study/statistical_analysis.py | <div align="center"> <img src="data/Icons/fastpathology_logo.png" width="128"> <h1 align="center">FastPathology</h1> <h3 align="center">Open-source software for deep learning-based digital pathology</h3> [](https://github.com/SINTEFMedtek/FAST-Pathology/releases) [](https://opensource.org/licenses/BSD-2-Clause)    | 924 |
SITE5039/AdaMixUp | ['data augmentation'] | ['MixUp as Locally Linear Out-Of-Manifold Regularization'] | GitHub.AdaMixup.AAAI.MNIST.Cifar.py preactivation_block get_data_ResNet get_data_adaMixup Mnist ResNet_Cifar_baseline extract_labels extract_images _read32 ResNet_Cifar_AdaMixup maybe_download join download format info newbyteorder Conv2D BNReLU Cifar10 Mnist MapData AugmentImageComponent BatchData Cifar10 Mnist MapData AugmentImageComponent BatchData | # AdaMixUp This repo contains the code for AdaMixUp, as presented in the paper "MixUp as Locally Linear Out-of-Manifold Regularization", by Hongyu Guo, Yongyi Mao and Richong Zhang, AAAI 2019 (https://arxiv.org/abs/1809.02499) Requirements and Installation: 1. A computer running macOS or Linux 2. Tensorpack: https://github.com/tensorpack/tensorpack 3. Python version 2.7 4. A Tensorflow installation Training: $ CUDA_VISIBLE_DEVICES=0 python GitHub.AdaMixup.AAAI.MNIST.Cifar.py Acknowledgement: | 925 |
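For readers unfamiliar with the baseline this AdaMixUp row builds on, the snippet below shows plain MixUp interpolation in numpy. AdaMixUp replaces the fixed Beta-sampled mixing ratio with a learned policy, so this is illustrative background, not code from the repository.

```python
# Background: vanilla MixUp with a fixed Beta(alpha, alpha) mixing ratio.
# AdaMixUp learns the mixing policy instead; this is not the repo's implementation.
import numpy as np

def mixup_batch(x1, y1, x2, y2, alpha=0.2):
    lam = np.random.beta(alpha, alpha)      # mixing coefficient
    x = lam * x1 + (1.0 - lam) * x2         # interpolated images
    y = lam * y1 + (1.0 - lam) * y2         # interpolated one-hot labels
    return x, y
```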
SJYuCNEL/Matrix-based-Dependence | ['outlier detection'] | ['Measuring Dependence with Matrix-based Entropy Functional'] | bike_sharing/dataloader.py bike_sharing/loss.py bike_sharing/model.py MI.py loss.py loss_fn GaussianKernelMatrix pairwise_distances HSIC renyi_entropy calculate_MI joint_entropy calculate_gram_mat RMSELoss calculate_MI GaussianKernelMatrix pairwise_distances HSIC joint_entropy loss_fn calculate_gram_mat reyi_entropy LinearRegression reshape pairwise_distances ones shape trace cuda eye mm GaussianKernelMatrix calculate_MI one_hot criterion view HSIC MSELoss renyi_entropy mean softmax CrossEntropyLoss t reshape mm view abs log2 trace sum calculate_gram_mat sum mul log2 trace abs calculate_gram_mat renyi_entropy joint_entropy max pairwise_distances abs log2 trace sum calculate_gram_mat reyi_entropy L1Loss sqrt reyi_entropy | # Matrix-based-Dependence code of "Measuring the Dependence with Matrix-based Entropy Functional" in AAAI 2021 | 926 |
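The function names in this row (`calculate_gram_mat`, `renyi_entropy`, `joint_entropy`, `calculate_MI`) follow the matrix-based entropy functional the paper is built on; the numpy sketch below restates that estimator in simplified form. The kernel width and variable names are mine, not the authors'.

```python
# Simplified sketch of matrix-based Renyi alpha-entropy and mutual information.
import numpy as np

def gram(x, sigma=1.0):
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian Gram matrix

def renyi_entropy(K, alpha=1.01):
    A = K / np.trace(K)                          # trace-normalized Gram matrix
    lam = np.abs(np.linalg.eigvalsh(A))
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def mutual_information(x, y, alpha=1.01):
    Kx, Ky = gram(x), gram(y)
    return (renyi_entropy(Kx, alpha) + renyi_entropy(Ky, alpha)
            - renyi_entropy(Kx * Ky, alpha))     # Hadamard product for the joint
```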
SPOClab-ca/word-class-flexibility | ['word embeddings'] | ['Word class flexibility: A deep contextualized approach'] | scripts/multilingual_bert_contextual.py notebooks/ELMoPlusBERT.py src/semantic_embedding.py notebooks/MultilingualUD.py notebooks/old/WALSCorrelation.py src/corpus.py src/old/partial.py scripts/run_mturk_correlations.py notebooks/ExtractBert.py scripts/old/simulate_variability.py scripts/old/entropy_based_variation.py notebooks/BasicUD.py tests/test_semantic_embedding.py scripts/old/bram_thesis_clean.py scripts/process_bnc.py notebooks/old/FactorsOfFlexibility.py src/const.py notebooks/old/MergeLemma.py scripts/old/partial_correlation.py notebooks/old/CompareEmbeddings.py scripts/process_wikipedia.py convert_to_bert_input process_ud_language get_name_for_group iterate_words process_line run_model process_language simulate_many gen_vectors do_simulation group_treebanks_by_language POSCorpus InvalidLemmaException SemanticEmbedding calculate_partial_correlation WordPieceMatchTest max print sort_values create_from_ud get_lemma_stats_merge_method sum len sentences set_title SemanticEmbedding print init_bert write mean_score apply clf savefig boxplot nv_cosine_similarity print create_from_pickle apply get_per_lemma_stats sort_values len mean sum gen_vectors range defaultdict replace glob append listdir arange coef_ intercept_ delete dot zeros LinearRegression enumerate values fit | # Word Class Flexibility This repository contains the source code and data for our EMNLP 2020 paper: ["*Word class flexibility: A deep contextualized approach*"](https://arxiv.org/abs/2009.09241) by Bai Li, Guillaume Thomas, Yang Xu, and Frank Rudzicz. **Abstract**: Word class flexibility refers to the phenomenon whereby a single word form is used across different grammatical categories. Extensive work in linguistic typology has sought to characterize word class flexibility across languages, but quantifying this phenomenon accurately and at scale has been fraught with difficulties. We propose a principled methodology to explore regularity in word class flexibility. Our method builds on recent work in contextualized word embeddings to quantify semantic shift between word classes (e.g., noun-to-verb, verb-to-noun), and we apply this method to 37 languages. We find that contextualized embeddings not only capture human judgment of class variation within words in English, but also uncover shared tendencies in class flexibility across languages. Specifically, we find greater semantic variation when flexible lemmas are used in their dominant word class, supporting the view that word class flexibility is a directional process. Our work highlights the utility of deep contextualized models in linguistic typology. ## Citation Please cite this work if you find it useful for your research. Li, B., Thomas, G., Xu, Y., and Rudzicz, F. (2020) Word class flexibility: A deep contextualized approach. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*. ``` @inproceedings{li2020wordclass, title={Word class flexibility: A deep contextualized approach}, author={Li, Bai and Thomas, Guillaume and Xu, Yang and Rudzicz, Frank}, | 927 |
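As an illustration of the kind of measurement this row describes (not the repository's actual `SemanticEmbedding` API), the sketch below scores one flexible lemma from two arrays of contextualized usage vectors: the noun-verb prototype distance approximates semantic shift between classes, and the mean distance to a prototype approximates within-class variation.

```python
# Illustrative only: noun-verb semantic shift for one lemma from contextual embeddings.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def noun_verb_shift(noun_vecs, verb_vecs):
    """noun_vecs, verb_vecs: (n_usages, dim) contextual vectors for one lemma."""
    n_proto, v_proto = noun_vecs.mean(axis=0), verb_vecs.mean(axis=0)
    shift = 1.0 - cosine(n_proto, v_proto)                    # between-class distance
    noun_var = np.mean([1.0 - cosine(v, n_proto) for v in noun_vecs])
    verb_var = np.mean([1.0 - cosine(v, v_proto) for v in verb_vecs])
    return shift, noun_var, verb_var
```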
SRPOL-AUI/storir | ['response generation', 'data augmentation', 'speech enhancement'] | ['StoRIR: Stochastic Room Impulse Response Generation for Audio Data Augmentation'] | storir/__init__.py storir/ir.py setup.py storir/rir_utils.py storir/audio_utils.py example.py decibels_to_gain peak_norm ImpulseResponse thin_out_reflections calculate_drr_energy_ratio sum log10 int choice len | # StoRIR: Stochastic Room Impulse Response Generation for Audio Data Augmentation This is an installable package containing code described in [StoRIR paper](https://arxiv.org/abs/2008.07231) presented on *Interspeech 2020*. For the example of use see [example.py file](example.py). Contact point: - Piotr Masztalski ([email protected]) Original authors: - Piotr Masztalski ([email protected]) - Mateusz Matuszewski ([email protected]) - Karol Piaskowski ([email protected]) - Michal Romaniuk ([email protected]) | 928 |
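Since the row only points to `example.py`, here is a hedged sketch of how a generated impulse response is typically used for augmentation. The `ImpulseResponse` constructor arguments and the `generate` method name are assumptions based on the paper's parameters (RT60, early decay time, initial time delay gap, early-reflection duration); consult `example.py` for the real signature.

```python
# Hedged usage sketch; constructor/method names are assumed, times in milliseconds.
import numpy as np
from scipy.signal import fftconvolve
from storir import ImpulseResponse   # class lives in storir/ir.py

dry = np.random.randn(16000)                                   # stand-in for 1 s of speech
rir_gen = ImpulseResponse(rt60=500, edt=50, itdg=3, er_duration=80)
rir = rir_gen.generate(sampling_rate=16000)                    # stochastic RIR draw (assumed API)
wet = fftconvolve(dry, rir)[: len(dry)]                        # reverberant copy
wet /= np.max(np.abs(wet))                                     # peak-normalize
```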
SSAW14/LearnableDilationNetwork | ['semantic segmentation'] | ['Learning Dilation Factors for Semantic Segmentation of Street Scenes'] | caffe-dynamic-dilation/python/caffe/detector.py caffe-dynamic-dilation/python/caffe/test/test_solver.py caffe-dynamic-dilation/tools/extra/parse_log.py caffe-dynamic-dilation/python/caffe/test/test_io.py caffe-dynamic-dilation/python/detect.py caffe-dynamic-dilation/scripts/copy_notebook.py caffe-dynamic-dilation/tools/extra/resize_and_crop_images.py caffe-dynamic-dilation/python/caffe/test/test_net.py caffe-dynamic-dilation/python/caffe/net_spec.py caffe-dynamic-dilation/python/caffe/test/test_python_layer.py caffe-dynamic-dilation/tools/extra/summarize.py caffe-dynamic-dilation/python/draw_net.py caffe-dynamic-dilation/examples/finetune_flickr_style/assemble_data.py caffe-dynamic-dilation/examples/pycaffe/layers/pyloss.py caffe-dynamic-dilation/src/caffe/test/test_data/generate_sample_data.py caffe-dynamic-dilation/scripts/download_model_binary.py caffe-dynamic-dilation/python/caffe/__init__.py caffe-dynamic-dilation/examples/pycaffe/caffenet.py caffe-dynamic-dilation/python/caffe/pycaffe.py caffe-dynamic-dilation/scripts/cpp_lint.py caffe-dynamic-dilation/examples/web_demo/app.py caffe-dynamic-dilation/python/caffe/test/test_python_layer_with_param_str.py caffe-dynamic-dilation/python/caffe/draw.py caffe-dynamic-dilation/python/classify.py caffe-dynamic-dilation/python/caffe/test/test_net_spec.py caffe-dynamic-dilation/python/caffe/classifier.py caffe-dynamic-dilation/tools/extra/extract_seconds.py caffe-dynamic-dilation/python/caffe/io.py caffe-dynamic-dilation/python/caffe/test/test_layer_type_list.py caffe-dynamic-dilation/examples/web_demo/exifutil.py download_image make_net max_pool caffenet conv_relu fc_relu EuclideanLossLayer start_tornado start_from_terminal embed_image_html classify_upload index allowed_file ImagenetClassifier classify_url open_oriented_im apply_orientation main main main parse_args Classifier Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto arraylist_to_blobprotovecor_str array_to_datum resize_image blobprotovector_str_to_arraylist load_image oversample Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_IdNameWrapper _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_inputs TestBlobProtoToArray TestLayerTypeList simple_net_file TestNet lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer TestPythonLayer ParameterLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth 
GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year imread urlretrieve Convolution InnerProduct Data SoftmaxWithLoss LRN Accuracy max_pool InnerProduct conv_relu fc_relu Dropout get read info load_image classify_image StringIO join replace info secure_filename save filename open_oriented_im classify_image fromarray replace astype save resize StringIO iteritems listen HTTPServer format print start WSGIContainer update start_tornado add_option OptionParser debug port parse_args ImagenetClassifier forward run hasattr _getexif astype float32 tile apply_orientation open transpose model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser read NetParameter output_image_file rankdir Merge draw_net_to_file items DESCRIPTOR batch_size str num_output get_pooling_types_dict add_edge get_edge_label Dot get_layer_label values name choose_color_by_layertype Edge Node bottom append type layer add_node top data array diff shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array LayerParameter NetParameter _to_proto extend Counter OrderedDict values iteritems hasattr isinstance extend add getattr setattr iteritems layers index set outputs _forward len iteritems _backward layers inputs index set len iteritems asarray extend copy next _batch forward len iteritems izip_longest asarray backward extend copy next _batch zip forward len ascontiguousarray concatenate zeros next range len NamedTemporaryFile str close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU InnerProduct data NetSpec DummyData Silence data2 error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find xrange len FindEndOfExpressionInLine xrange len FindStartOfExpressionInLine error min search I xrange len FileInfo RepositoryName sep sub ParseNolintSuppressions 
error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin xrange NumLines Match raw_lines Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub xrange find CheckComment Match Search lines_without_raw_strings error group starting_linenum Match range Search error rfind len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine xrange find Match CloseExpression len error Match finditer normalize isinstance GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set itervalues append M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression lstrip xrange findall Match Search ReplaceAll error Match Search endswith replace setdefault group search CleanseComments open FilesBelongToSameModule error search copy sub xrange NumLines FullName keys error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState NumLines RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard xrange CheckCompletedBlocks CheckForIncludeWhatYouUse ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile getwriter int time write flush load join index int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open | # LearnableDilationNetwork Source code for "Learning Dilation Factors for Semantic Segmentation of Street Scenes (GCPR2017)" We modified the convolution layer in standard Caffe deep learning framework. This is a demo to train a Deeplab-LargeFOV semantic segmentation model with our learnable dilated convolution networks. ## Prerequisites - Linux or OSX. - Python 2 or Python 3. - CPU or NVIDIA GPU + CUDA CuDNN. ## Compilation of Caffe Please follow the instruction from http://caffe.berkeleyvision.org/installation.html to compile our modified Caffe. | 929 |
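The row above ships a modified Caffe, so there is no Python API to demonstrate; as conceptual background only, the snippet below shows what a fixed dilation factor looks like in a standard framework. The paper's contribution is to make that factor a learnable, per-layer parameter inside the convolution layer rather than a hand-picked hyperparameter.

```python
# Conceptual background (PyTorch, not the repo's Caffe code): dilation is normally a
# fixed integer hyperparameter of the convolution; the paper learns it during training.
import torch
import torch.nn as nn

fixed = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)   # hand-picked dilation
x = torch.randn(1, 64, 65, 65)
print(fixed(x).shape)   # padding = dilation keeps the spatial size for 3x3 kernels
```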
SSAW14/segmentation_membership_inference | ['semantic segmentation'] | ['Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation'] | models/__init__.py models/customize.py attack.py models/resnet.py SLM Label2Tensor test Argmax main View PyramidPooling Normalize ConcurrentModule Sum Mean GramMatrix GlobalAvgPool2d ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 get_resnet resnet18 BasicBlock resnet101 get_resnet_classification_model load format concatenate print set_device test get_resnet_classification_model resume load_state_dict array arch parse_args input_channel cuda gpu roc_auc_score shape zeros unique argsort zeros range shape shape range zeros model argmax cuda open shape Argmax append normal SLM Label2Tensor concatenate eval listdir load join gauss sort reshape repeat randint numpy array gpu len load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict resnet50 resnet34 resnet18 resnet152 resnet101 | # Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation by Yang He, Shadi Rahimian, Bernt Schiele and Mario Fritz, ECCV2020 (https://arxiv.org/abs/1912.09685). **Note**: The current software is tested with PyTorch 0.4.1 and Python 3.6. ## Example ### Download We provide example data and model weights at https://drive.google.com/drive/folders/1A4WBp5qxS8rn_EnbCY7H5VArtsErS8pv. Download the required files and run ```bash unzip examples.zip && weights.zip ``` | 930 |
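To make the threat model in this row concrete, the sketch below shows a generic loss-based membership-inference baseline for segmentation: summarize how well the model fits a sample, then feed those statistics to a binary member/non-member classifier. This is deliberately generic — the repository's `attack.py` instead trains a CNN (ResNet) on structured segmentation outputs — and every name here is illustrative.

```python
# Generic baseline only; not the paper's structured attack.
import numpy as np

def membership_features(prob_map, label_map):
    """prob_map: (C, H, W) softmax output; label_map: (H, W) integer class indices."""
    p_true = np.take_along_axis(prob_map, label_map[None], axis=0)[0]
    xent = -np.log(np.clip(p_true, 1e-12, 1.0))      # per-pixel cross-entropy
    conf = prob_map.max(axis=0)                      # per-pixel confidence
    return np.array([xent.mean(), xent.std(), conf.mean(), conf.std()])

# Fit any binary classifier on these features from shadow-model members/non-members,
# then score the target model's predictions with it.
```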
SSusantAchary/FaceDetection-counting-using-MTCNN | ['face detection', 'face alignment'] | ['Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks'] | MTCNNDetector.py renameSort.py | main str sorted time imwrite sort len write close range detect_face rectangle create_mtcnn imread listdir Session open | ## Apologies for the deprecated code; it will be updated with a working algorithm # MTCNN-TensorFlow - Face counting based on Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks Face detection is an important problem: the goal is to detect every face in an image containing people. The original work can be found at: https://arxiv.org/pdf/1604.02878.pdf What the MTCNN algorithm essentially does is: -First Stage: quickly produces candidate windows (regions likely to contain a face) through a shallow CNN. -Second Stage: refines the windows to reject a large number of non-face windows through a more complex CNN. -Third Stage: a more powerful CNN refines the result and outputs facial landmark positions. | 931 |
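The symbols in this row (`create_mtcnn`, `detect_face`, `Session`, `rectangle`, `imread`) suggest the facenet-style TensorFlow MTCNN helpers; the hedged sketch below counts and boxes faces that way. The `detect_face` module path, thresholds, and scale factor are common defaults, not values taken from this repository.

```python
# Hedged sketch of MTCNN-based face counting (TF1-style session, facenet's detect_face).
import cv2
import tensorflow as tf
import detect_face   # align/detect_face.py from the facenet project (assumed dependency)

with tf.Session() as sess:
    pnet, rnet, onet = detect_face.create_mtcnn(sess, None)
    img = cv2.cvtColor(cv2.imread("people.jpg"), cv2.COLOR_BGR2RGB)
    boxes, _ = detect_face.detect_face(img, 20, pnet, rnet, onet, [0.6, 0.7, 0.7], 0.709)
    print("faces found:", len(boxes))
    for x1, y1, x2, y2, score in boxes:
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    cv2.imwrite("people_boxed.jpg", cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
```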
STOR-i/GaussianProcesses.jl | ['gaussian processes'] | ['GaussianProcesses.jl: A Nonparametric Bayes package for the Julia Language'] | perf/benchmarks/benchmark_GPy.py perf/benchmarks/benchmark GPflow.py | # GaussianProcesses.jl [](https://travis-ci.org/STOR-i/GaussianProcesses.jl) [](https://ci.appveyor.com/project/STOR-i/gaussianprocesses-jl) [](https://coveralls.io/github/STOR-i/GaussianProcesses.jl?branch=master) [](https://codecov.io/gh/STOR-i/GaussianProcesses.jl) [](http://STOR-i.github.io/GaussianProcesses.jl/latest) A Gaussian Processes package for Julia. This package is still under development. If you have any suggestions to improve the package, or if you've noticed a bug, then please post an [issue](https://github.com/STOR-i/GaussianProcesses.jl/issues/new) for us and we'll get to it as quickly as we can. Pull requests are also welcome. ## Citing GaussianProcesses.jl To cite GaussianProcesses.jl, please reference the [paper](http://statistik-jstat.uibk.ac.at/article/view/v102i01). Sample Bibtex is given below: | 932 |
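GaussianProcesses.jl itself is a Julia package, so rather than guess at its API, the numpy sketch below writes out the exact-GP posterior such a package computes (squared-exponential kernel, Gaussian observation noise); it mirrors what the repo's GPy/GPflow benchmark scripts evaluate.

```python
# Reference math only (1-D inputs): GP posterior mean and covariance with an SE kernel.
import numpy as np

def se_kernel(a, b, ell=1.0, sf=1.0):
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_posterior(x, y, x_star, noise=1e-2):
    K = se_kernel(x, x) + noise * np.eye(len(x))
    Ks, Kss = se_kernel(x, x_star), se_kernel(x_star, x_star)
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov
```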
SUNCAT-Center/CatLearn | ['active learning'] | ['An Atomistic Machine Learning Package for Surface Science and Catalysis'] | catlearn/ga/initialize.py test/test_neighborlist.py test/test_suite.py tutorials/11_NEB/00_Tutorial/example_script_qe.py catlearn/preprocess/feature_engineering.py catlearn/learning_curve/pltfile.py tutorials/11_NEB/05_terrace_pt/nebterrace.py catlearn/preprocess/importance_testing.py catlearn/preprocess/clean_data.py catlearn/api/ase_data_setup.py catlearn/optimize/tools.py tutorials/05_toy_model_gradients/2D_toy_model.py catlearn/utilities/utilities.py catlearn/utilities/__init__.py setup/git-clean.py test/test_mlneb.py catlearn/learning_curve/__init__.py test/test_acquisition.py tutorials/11_NEB/03_Heptamer_Island/neb_heptamer_island.py catlearn/regression/gpfunctions/io.py test/test_mlmin.py tutorials/11_NEB/00_Vasp_example/Example-VASP.py catlearn/optimize/mlmin.py catlearn/featurize/site_stability.py catlearn/ga/mating.py tutorials/12_MLMin/2_Cluster/min_cluster.py catlearn/preprocess/feature_elimination.py catlearn/active_learning/__init__.py test/test_1_feature_generation.py setup.py catlearn/utilities/sammon.py test/test_predict.py catlearn/active_learning/algorithm.py test/test_feature_optimization.py tutorials/11_NEB/01_MullerBrown/neb_mullerbrown.py catlearn/featurize/slab_utilities.py catlearn/regression/gpfunctions/covariance.py catlearn/regression/gpfunctions/hyperparameter_scaling.py test/test_ads_fp_gen.py tutorials/03_linear_models/linear_models.py catlearn/cross_validation/hierarchy_cv.py catlearn/fingerprint/bulk.py catlearn/regression/gpfunctions/sensitivity.py catlearn/ga/__init__.py docs/conf.py catlearn/regression/gpfunctions/log_marginal_likelihood.py tutorials/10_feature_selection/genetic_algorithm.py catlearn/utilities/database_functions.py test/test_chalcogenides.py catlearn/regression/__init__.py tutorials/11_NEB/02_Diffusion_Au_atom/neb_Au_diffusion.py catlearn/optimize/constraints.py tutorials/09_bulk_fingerprints/Voronoi_FP.py catlearn/featurize/base.py test/common.py catlearn/featurize/adsorbate_prep.py catlearn/fingerprint/adsorbate.py test/test_gradients.py catlearn/estimator/general_preprocess.py catlearn/fingerprint/prototype.py catlearn/fingerprint/voro.py tutorials/06_kernel_parameters/linear_kernel_parameters.py catlearn/active_learning/acquisition_functions.py tutorials/11_NEB/04_CO_Cu111/nebCO.py test/test_bulk_fp_gen.py tutorials/test_notebooks.py test/test_functions.py catlearn/api/networkx_graph_api.py catlearn/fingerprint/particle.py catlearn/utilities/clustering.py catlearn/learning_curve/learning_curve.py test/test_learning_curve.py catlearn/optimize/mlneb.py catlearn/cross_validation/__init__.py catlearn/featurize/neighbor_matrix.py tutorials/08_organic_molecules/from_neighborlist.py catlearn/learning_curve/feature_selection.py catlearn/preprocess/scaling.py catlearn/featurize/asap_wrapper.py tutorials/05_toy_model_gradients/toy_model_acquisition.py tutorials/06_kernel_parameters/gaussian_kernel_parameters.py catlearn/fingerprint/convoluted.py catlearn/optimize/io.py catlearn/regression/cost_function.py catlearn/regression/gpfunctions/kernel_scaling.py test/test_lml_optimizer.py test/test_feature_base.py catlearn/__init__.py tutorials/12_MLMin/3_GPAW/min_gpaw.py catlearn/utilities/distribution.py catlearn/cross_validation/k_fold_cv.py catlearn/regression/gpfunctions/kernels.py tutorials/01_toy_model/toy_model.py catlearn/api/ase_atoms_api.py catlearn/ga/algorithm.py catlearn/regression/gpfunctions/uncertainty.py 
catlearn/fingerprint/chalcogenide.py catlearn/utilities/penalty_functions.py catlearn/estimator/general_kernel.py catlearn/ga/predictors.py test/test_autocorrelation.py test/test_ga.py test/test_voronoi.py test/test_scale.py catlearn/utilities/neighborlist.py catlearn/preprocess/feature_extraction.py test/test_validation.py test/test_api.py catlearn/ga/io.py catlearn/ga/mutate.py catlearn/featurize/setup.py catlearn/regression/ridge_regression.py catlearn/optimize/functions_calc.py tutorials/04_uncertainties/uncertainties.py tutorials/07_cross_validation/hierarchy_cv.py test/test_data_clean.py tutorials/12_MLMin/4_FixBondLength/min_constraint.py catlearn/learning_curve/data_process.py tutorials/12_MLMin/1_Heptamer_island/min_heptamer.py tutorials/07_cross_validation/k_fold_cv.py catlearn/fingerprint/graph.py catlearn/learning_curve/placeholder.py catlearn/fingerprint/molecule.py catlearn/api/catmap.py catlearn/featurize/periodic_table_data.py catlearn/preprocess/greedy_elimination.py tutorials/08_organic_molecules/from_coordinates.py tutorials/11_NEB/06_gpaw/neb_Au_diffusion.py test/test_io.py tutorials/05_toy_model_gradients/toy_model.py catlearn/fingerprint/catapp.py catlearn/regression/gpfunctions/kernel_setup.py catlearn/ga/natural_selection.py catlearn/fingerprint/standard.py catlearn/estimator/general_gp.py catlearn/regression/scikit_wrapper.py catlearn/regression/gaussian_process.py catlearn/regression/gpfunctions/default_scale.py catlearn/ga/convergence.py parse_requirements probability_density random_acquisition EI PI optimistic_proximity rank UCB optimistic cluster proximity classify _test_acquisition ActiveLearning extend_atoms_class set_features images_connectivity get_neighborlist set_neighborlist get_features get_graph images_pair_distances set_graph _initialize_catlearn database_to_list get_train get_unique catmap_energy_landscape catmap_pickle get_rate_control matrix_to_nl ase_to_networkx networkx_to_adjacency Hierarchy write_split read_split k_fold GeneralGaussianProcess default_lengthscale general_kernel smooth_kernel GeneralPrepreprocess info2primary_index layers_termination layers2ads_index check_reconstructions detect_adsorbate constraints_termination slab_positions2ads_index termination_info connectivity_termination attach_cations auto_layers connectivity2ads_index catalysis_hub_to_info z2ads_index sym2ads_index detect_termination tags_termination ads_index tags2ads_index compare_slab_connectivity last2ads_index slab_index formula2ads_index autogen_info ptm_alloy_fpv ptm_structure_fpv BaseGenerator check_labels _generalized_matrix _element_list connection_dict neighbor_features _get_features connection_matrix _get_neighborlist _heteroatomic_matrix property_matrix _get_periodic_neighborlist stat_mendeleev_params make_labels get_mendeleev_params n_outer list_mendeleev_params default_catlearn_radius get_radius default_fingerprinters FeatureGenerator Material update_site_file SiteFeaturizer composition_name get_df GAFeatureSelection get_site_index traj_to_reference_dict unique_set stoichiometry is_oxide is_metal slab_layers AdsorbateFingerprintGenerator BulkFingerprintGenerator CatappFingerprintGenerator ChalcogenideFingerprintGenerator check_length ConvolutedFingerprintGenerator GraphFingerprintGenerator AutoCorrelationFingerprintGenerator ParticleFingerprintGenerator PrototypeFingerprintGenerator PrototypeSites StandardFingerprintGenerator VoronoiFingerprintGenerator GeneticAlgorithm _cross_validate Convergence initialize_population _write_data read_data cut_and_splice 
random_permutation probability_remove probability_include population_reduction remove_duplicates minimize_error_descriptors minimize_error_time minimize_error data_process feature_selection feature_frequency hierarchy _single_test LearningCurve placeholder originalplot violinplot featselect_featvar_plot apply_mask unmask_geometry create_mask GoldsteinPrice Rosenbrock ModifiedHimmelblau MultiModal NoiseHimmelblau MullerBrown Himmelblau print_info array_to_ase print_version print_cite_mlneb print_time print_cite_mlmin ase_to_catlearn print_info_neb store_trajectory_neb store_results_neb array_to_atoms converged MLMin get_fmax fit create_ml_neb get_energy_catlearn get_fmax ASECalc train_gp_model get_forces_catlearn get_results_predicted_path MLNEB eval_and_append plotneb clean_variance clean_skewness remove_outliers clean_infinite FeatureScreening get_labels_ablog _separate_list generate_positive_features get_ablog generate_features get_order_2ab get_order_2 get_labels_order_2 _decode_key single_transform get_div_order_2 get_labels_order_2ab pca spca catlearn_pca pls _single_elimination GreedyElimination feature_invariance _predictor feature_randomize ImportanceElimination feature_shuffle target_normalize unit_length target_center min_max target_standardize standardize normalize _get_percentiles get_error _cost_function GaussianProcess RidgeRegression RegressionFit get_covariance ScaleData hyperparameters rescale_hyperparameters read read_train_data _kernel_list_to_group write _load_dict_from_group _load_kernel_list_from_group _dict_to_group write_train_data constant_multi_kernel linear_kernel quadratic_kernel laplacian_dk_dwidth laplacian_kernel constant_kernel noise_multi_kernel quadratic_dk_dslope scaled_sqe_kernel AA_kernel quadratic_dk_ddegree gaussian_kernel gaussian_dk_dwidth sqe_kernel gaussian_xx_gradients gaussian_xxp_gradients _laplacian_kernel_scale _constant_kernel_scale kernel_scaling _gaussian_kernel_scale _AA_kernel_scale _quadratic_kernel_scale _linear_kernel_scale kdict2list _noise_multi_setup _get_theta kdicts2list list2kdict _scaling_setup _constant_multi_setup _gaussian_setup _quadratic_setup _constant_setup prepare_kernels _laplacian_setup log_marginal_likelihood dK_dtheta_j SensitivityAnalysis get_uncertainty cluster_features _cluster_split FingerprintDB DescriptorDatabase pair_deviation pair_distribution _distance_hist ase_connectivity _neighbor_iterator ase_neighborlist catlearn_neighborlist PenaltyFunctions sammons_error target_correlation holdout_set formal_charges geometry_hash delete_branch get_merged_branches get_data TestFeatureGeneration _surrogate_model TestAcquisition classifier TestAdsorbateFeatures TestEnergyLandscape TestAPI TestAutoCorrelation TestBulkFeatures TestChalcogenides TestDataClean TestBaseGenerator TestFeatureOptimization test_predict prediction train_predict ConfigTestCase TestGeneticAlgorithm TestGaussianKernel TestIO TestCurve lml_test lml_opt scale_test get_data lml_plotter TestMLMin TestMLNEB TestNeighborList TestPrediction TestScaling TestHyperparameterScaling setup_suite TestValidation predict TestVoronoiFeatures test_notebooks afunc afunc afunc au afunc2d afunc lineary afunc lineary afunc plot afunc plot afunc gp_predict rr_predict gp_predict parallel_plot rr_predict parallel_plot fitf ravel abs ravel append cluster_features Counter enumerate probability_density isinstance EI min PI UCB optimistic cluster max update probability_density defaultdict asarray isinstance EI min PI len classifier optimistic UCB append zeros max enumerate max 
list permutation RandomState arange surrogate_model delete append array range len int connectivity connect id select get_atoms append float array ctime connectivity ase_connectivity warn tqdm sum get_all_distances array tqdm MethodType _initialize_catlearn _initialize_catlearn _initialize_catlearn _initialize_catlearn _initialize_catlearn _initialize_catlearn catlearn list defaultdict shuffle append enumerate list defaultdict shuffle append enumerate get str join update _get_adsorbate_fields connect site enumerate load open split list format add_edges_from extend_atoms_class Graph get_neighborlist ase_neighborlist add_nodes_from get_atomic_numbers type range add_node len list format asarray type to_numpy_matrix values update fill_diagonal astype shape range array_split concatenate shuffle shape append append tolist append asarray default_lengthscale std default_lengthscale info2primary_index images_connectivity tqdm detect_termination detect_adsorbate slab_index enumerate detect_termination tqdm int join str tags_termination len layers_termination warn connectivity_termination constraints_termination split append compare_slab_connectivity enumerate connectivity string2symbols sort unique str string2symbols str sort min get_positions get_distances argsort unique count sorted symbols2numbers string2symbols list warn range len string2symbols int auto_layers string2symbols len list print connectivity warn unique append str sorted list warn unique auto_layers len count_nonzero list fill_diagonal copy unique append max enumerate count_nonzero list fill_diagonal copy unique append max enumerate average get_layers PTM PTM str len _generalized_matrix connection_matrix property_matrix get_atomic_numbers sum _get_neighborlist _get_periodic_neighborlist append range len max _get_neighborlist _get_periodic_neighborlist get set append get_atomic_numbers len norm defaultdict asarray index position append defaultdict get_pbc get_mic_distance index position append get_cell append append len append range len _element_list _heteroatomic_matrix set append sum array len isalpha int int get_mendeleev_params len append array enumerate isdigit get_mendeleev_params len warn nan islower array enumerate append isupper get_mendeleev_params get_radius append join list keys items list read listdir get_potential_energy rename read_csv drop load join Material dump cohesive_energy atoms get_df strftime apply material_image_index open float enumerate split append any get_positions list from_iterable list count get_chemical_symbols set stoichiometry var list stoichiometry is_oxide sorted len set mean sqrt labels_ zip append sum range fit str len pop deepcopy concatenate fit_func zeros range arange randint choice zeros range array concatenate deepcopy randint len deepcopy random randint range len deepcopy random range append sort asarray tolist delete unique append round range len GaussianProcess predict GaussianProcess predict time GaussianProcess predict int split_index str predict_subsets hstack placeholder reversed data_process globalscaledata save target_normalize globalscaling log load_split int split_index getstats placeholder data_process globalscaledata target_normalize globalscaling log load_split model show str set_title xlabel ylabel figure legend pointplot show str errorbar set_title plot xlabel ylabel scatter get_statistic legend figure show xlabel add_subplot ylabel figure legend violinplot pointplot ones_like positions flatten argwhere range len flatten zeros range len asarray range copy len 
get_number_of_atoms reshape get_potential_energy flatten constraints append range len reshape list zip reshape max_abs_forces int tab_neb format print_time energy_backward parprint mean round array iter append energy_forward max range format print_time list_max_abs_forces list_targets parprint round range len uncertainty_path e sfit savetxt zeros efit s images write now parprint parprint parprint print GaussianProcess print_cite_mlmin list_fmax get_fmax parprint array filename abs max reshape sqrt zeros sum max set_positions set_calculator get_positions ASECalc copy append range set_constraint std GaussianProcess copy parprint optimize_hyperparameters kernel_list max len get_total_energy images NEBTools append predict array_to_ase set_calculator get_potential_energy num_atoms Atoms ase_ini flatten list_targets reshape get_energy_catlearn list_gradients append array list_train show read subplots errorbar plot view print set_xlabel tight_layout dict set_ylabel savefig get_fit NEBTools annotate append max defaultdict masked_less median abs masked_greater defaultdict asarray copy nanstd SimpleImputer defaultdict asarray all list reshape mean transform array fit_transform defaultdict asarray copy skew concatenate abs min shape range zeros shape range zeros true_divide append range len shape range zeros append str range len min range shape zeros abs log append str range len append format range len join list sort set append split list append combinations_with_replacement range len list generate_positive_features len set intersection append _decode_key split transform PLSRegression fit transform PCA fit SparsePCA transform fit svd defaultdict reshape transpose dot standardize clean_variance append range len isinstance delete predict transform test_predict isinstance mean std copy concatenate concatenate random copy mean std shuffle copy place defaultdict std concatenate mean array place defaultdict concatenate min mean max place defaultdict concatenate min max transpose defaultdict place norm mean defaultdict asarray std defaultdict asarray min mean max mean defaultdict asarray place defaultdict _get_percentiles asarray format square mean sqrt nanmean median abs max log percentile get_covariance inv reduce list2kdict dot exp format copy shape eval zeros kdict2list regularization train_target kernel_list train_fp write_train_data read_train_data GaussianProcess _kernel_list_to_group File format create_dataset asarray format File _load_kernel_list_from_group float str enumerate _dict_to_group append items _load_dict_from_group keys items isinstance items value isinstance Group Dataset ones shape exp zeros T exp ones shape zeros shape exp zeros identity exp fill_diagonal cdist pdist array squareform T reshape identity outer shape zeros range shape range zeros pdist fill_diagonal squareform exp fill_diagonal cdist pdist squareform cdist vstack len cdist vstack fill_diagonal cdist pdist array squareform exp fill_diagonal cdist pdist array squareform abs pdist fill_diagonal squareform print eval format print format print format print format print format print format _scaling_setup eval format float format float format float format str _get_theta len list len append concatenate kdict2list list float len T get_covariance reshape pi einsum list2kdict dot dK_dtheta_j eye cho_solve sum log cho_factor len format laplacian_dk_dwidth concatenate ones len quadratic_dk_dslope shape eval gaussian_dk_dwidth append quadratic_dk_ddegree kdict2list get_covariance dot sqrt diagonal einsum norm defaultdict kmeans2 
_cluster_split zip append float range len append defaultdict zip min pi _distance_hist nansum histogram max len min _distance_hist nansum histogram max len isinstance copy add get_all_distances vstack histogram ravel enumerate update sorted list map NeighborList enumerate asarray format _neighbor_iterator fromkeys isinstance fill_diagonal reshape set get_all_distances append zeros get_atomic_numbers get_radius range len update hasattr get_connectivity_matrix neighborlist NeighborList sum pdist tril squareform connectivity vstack zeros sum enumerate len seed int shuffle array len append T join sorted replace md5 get_positions wrap array_str flatten round get_cell array hexdigest split check_output join get_column_names DescriptorDatabase reshape query_db argsort probability_density GaussianProcess predict RidgeRegression dot find_optimal_regularization zip len GaussianProcess predict lml_opt scale_test print get_data list2kdict minimize print kdicts2list shape log_marginal_likelihood append basinhopping prepare_kernels standardize target_standardize kdicts2list linspace subplot str set_xlabel axvline shape log10 ceil append range semilogx copy sqrt print min log_marginal_likelihood set_ylabel array len TestSuite TestLoader TextTestRunner loadTestsFromTestCase append run RidgeRegression dot find_optimal_regularization zip append len pop join remove format green print rs call red split exp cos sin exp xlabel add_subplot ylabel axis RidgeRegression dot find_optimal_regularization zip append len GaussianProcess predict len set_fontsize get_yticklabels xlabel grid get_xticklabels ylabel set_fontname set_visible parallel_coordinates set_ticks_position figure GaussianProcess predict | # CatLearn > An environment for atomistic machine learning in Python for applications in catalysis. [](https://zenodo.org/badge/latestdoi/130307939) [](https://travis-ci.org/SUNCAT-Center/CatLearn) [](https://coveralls.io/github/SUNCAT-Center/CatLearn?branch=master) [](http://catlearn.readthedocs.io/en/latest/?badge=latest) [](https://badge.fury.io/py/CatLearn) [](https://www.gnu.org/licenses/gpl-3.0) Utilities for building and testing atomic machine learning models. Gaussian Processes (GP) regression machine learning routines are implemented. These will take any numpy array of training and test feature matrices along with a vector of target values. In general, any data prepared in this fashion can be fed to the GP routines, a number of additional functions have been added that interface with [ASE](https://wiki.fysik.dtu.dk/ase/). This integration allows for the manipulation of atoms objects through GP predictions, as well as dynamic generation of descriptors through use of the many ASE functions. CatLearn also includes the [MLNEB](https://github.com/SUNCAT-Center/CatLearn/tree/master/tutorials/11_NEB) algorithm for efficient transition state search, and the [MLMIN](https://github.com/SUNCAT-Center/CatLearn/tree/master/tutorials/12_MLMin) algorithm for efficient atomic structure optimization. Please see the [tutorials](https://github.com/SUNCAT-Center/CatLearn/tree/master/tutorials) for a detailed overview of what the code can do and the conventions used in setting up the predictive models. For an overview of all the functionality available, please read the [documentation](http://catlearn.readthedocs.io/en/latest/). ## Table of contents - [Installation](#installation) - [Tutorials](#tutorials) | 933 |
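The CatLearn README above says the GP routines "take any numpy array of training and test feature matrices along with a vector of target values"; the sketch below spells that out. Keyword names (`kernel_list`, `regularization`, `train_fp`, `train_target`, the kernel dictionary keys) follow the CatLearn tutorials and may differ between releases, so treat them as assumptions.

```python
# Hedged usage sketch of CatLearn's numpy-in/numpy-out GP regression.
import numpy as np
from catlearn.regression import GaussianProcess

train_fp = np.random.rand(50, 4)      # 50 systems x 4 features
train_y = np.random.rand(50)          # e.g. adsorption energies
test_fp = np.random.rand(10, 4)

kernel = [{"type": "gaussian", "width": 0.5, "scaling": 1.0}]
gp = GaussianProcess(kernel_list=kernel, regularization=1e-3,
                     train_fp=train_fp, train_target=train_y,
                     optimize_hyperparameters=True)
pred = gp.predict(test_fp=test_fp, uncertainty=True)
print(pred["prediction"], pred["uncertainty"])
```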
SWKoreaBME/D-Unet_PyTorch | ['medical image segmentation', 'medical diagnosis', 'lesion segmentation', 'semantic segmentation'] | ['D-UNet: a dimension-fusion U shape network for chronic stroke lesion segmentation'] | DUnet.py loss.py DUnet_parts.py DUnet weights_init_he BN_block2d BN_block3d up_block D_SE_Add Squeeze SE_block Expand enhanced_mixing_loss kaiming_uniform_ isinstance Conv3d Conv2d zeros_ bias weight ones_like view zeros_like clamp where mean pow eq sum log | # D-Unet: a dimension-fusion U shape network for chronic stroke lesion segmentation #### D-Unet implemented in PyTorch <img src="./fig1.png"/> ## Usage ```python from DUnet import DUnet import torch BATCH_SIZE = 4 input_batch = torch.Tensor(BATCH_SIZE, 4, 192, 192) model = DUnet(in_channels = 4) | 934 |
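The usage block quoted in this row stops before the forward pass, so here is a hedged continuation; the output shape and the `enhanced_mixing_loss(y_true, y_pred)` signature are assumptions inferred from the file list, not documented behaviour.

```python
# Continuation sketch; shapes and the loss signature are assumed.
import torch
from DUnet import DUnet
from loss import enhanced_mixing_loss   # dice/focal-style mixing loss (signature assumed)

model = DUnet(in_channels=4)
x = torch.randn(2, 4, 192, 192)                          # (batch, modalities, H, W)
y_true = torch.randint(0, 2, (2, 1, 192, 192)).float()   # binary lesion mask
y_pred = model(x)                                        # assumed (2, 1, 192, 192) probabilities
loss = enhanced_mixing_loss(y_true, y_pred)
loss.backward()
```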
SZUHvern/D-UNet | ['medical image segmentation', 'medical diagnosis', 'lesion segmentation', 'semantic segmentation'] | ['D-UNet: a dimension-fusion U shape network for chronic stroke lesion segmentation'] | Stroke_segment.py data_load.py Statistics.py model.py | data_toxn nii_to_h5 data_adjust load_h5 squeeze_excite_block BN_block3d D_concat Unet3d SegNet squeeze BN_block expand origin_block Unet_origin D_SE_concat D_Add conv_bn_block fcn_8s D_SE_Add Unet D_Unet D_concat_SE D_Add_SE zeros_like max str transpose append range concatenate get_fdata close shuffle listdir enumerate load deepcopy print min File create_dataset array len str File close create_dataset max uint8 asarray zeros_like ANTIALIAS multiply File min shuffle array resize append crop range enumerate len int zeros_like print zeros range len expand_dims squeeze_excite_block squeeze_excite_block squeeze_excite_block squeeze_excite_block multiply BN_block3d BN_block D_SE_Add Model Input Model Input BN_block Model Input origin_block BN_block3d Model Input BN_block Model Input Model Input load_weights | # D-UNet D-UNet: a dimension-fusion U shape network for chronic stroke lesion segmentation  # Author Yongjin Zhou, Weijian Huang, Pei Dong, Yong Xia, and Shanshan Wang. # Project introduction ## 1. Functionality D-UNet is applied to image segmentation on the ATLAS dataset, combining 3D feature extraction with an efficient implementation. ## 2. Performance |DSC|Recall|Precision| Total parameters| | 935 |
SaeedNajafi/OCD-Learning | ['speech recognition'] | ['Optimal Completion Distillation for Sequence Learning'] | setup.py tests/test_ocd.py ocd/__init__.py OCD test_forward expected_policy test__mask test_edit_q_values test_loss expected_edit_dist_mask test_policy expected_q_values fix_seed seed manual_seed_all manual_seed is_available len edit_distance_mask sequence_mask LongTensor edit_distance_q_values LongTensor compute_optimal_pi FloatTensor byte FloatTensor log_softmax size randn_like compute_optimal_pi loss LongTensor ocd FloatTensor randn_like OCD compute_optimal_pi | [](https://circleci.com/gh/SaeedNajafi/pytorch-ocd/tree/master) # Optimal Completion Distillation (OCD) Training Implementation of the Optimal Completion Distillation for Sequence Labeling </br> source : https://arxiv.org/abs/1810.01398 ## Requirements `python3`, `pytorch 1.0.0` ## Install ```sh python3 -m venv env source env/bin/activate | 936 |
SaeedNajafi/pytorch-ocd | ['speech recognition'] | ['Optimal Completion Distillation for Sequence Learning'] | setup.py tests/test_ocd.py ocd/__init__.py OCD test_forward expected_policy test__mask test_edit_q_values test_loss expected_edit_dist_mask test_policy expected_q_values fix_seed seed manual_seed_all manual_seed is_available len edit_distance_mask sequence_mask LongTensor edit_distance_q_values LongTensor compute_optimal_pi FloatTensor byte FloatTensor log_softmax size randn_like compute_optimal_pi loss LongTensor ocd FloatTensor randn_like OCD compute_optimal_pi | [](https://circleci.com/gh/SaeedNajafi/pytorch-ocd/tree/master) # Optimal Completion Distillation (OCD) Training Implementation of the Optimal Completion Distillation for Sequence Labeling </br> source : https://arxiv.org/abs/1810.01398 ## Requirements `python3`, `pytorch 1.0.0` ## Install ```sh python3 -m venv env source env/bin/activate | 937 |
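The two OCD rows above describe the same package. Conceptually, its `edit_distance_q_values` routine assigns each (prefix, token) pair a Q-value from the edit distance of the best completion, and training minimizes cross-entropy against the induced optimal policy. The sketch below shows only that final step, given precomputed Q-values, as an illustration rather than the package's API.

```python
# Conceptual OCD loss given edit-distance Q-values (temperature -> 0 limit).
import torch
import torch.nn.functional as F

def ocd_loss(logits, q_values):
    """logits, q_values: (batch, seq_len, vocab); larger Q = better completion."""
    best = q_values.max(dim=-1, keepdim=True).values
    pi_star = (q_values == best).float()
    pi_star = pi_star / pi_star.sum(dim=-1, keepdim=True)   # uniform over optimal tokens
    return -(pi_star * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```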
Saehyung-Lee/OAT | ['data augmentation'] | ['Removing Undesirable Feature Contributions Using Out-of-Distribution Data'] | TRADES/dataset.py TRADES/models/resnet.py AT/cifar10_input.py AT/eval.py TRADES/trades.py TRADES/models/wideresnet.py TRADES/pgd_attack_cifar10.py AT/pgd_attack.py AT/tiny_input.py AT/train.py TRADES/train_trades_cifar10.py TRADES/models/__init__.py AT/model.py TRADES/models/small_cnn.py TRADES/models/net_mnist.py DataSubset AugmentedDataSubset AugmentedCIFAR10Data CIFAR10Data mkdir_p download maybe_download_and_extract evaluate Model LinfPGDAttack DataSubset AugmentedDataSubset OODData AugmentedOODData comp_and_save_ckpt evaluate TinyImageNet _cw_whitebox _pgd_blackbox eval_adv_test_blackbox one_hot_tensor _pgd_whitebox eval_adv_test_whitebox main softCrossEntropy l2_norm trades_loss one_hot_tensor squared_l2_norm eval_train eval_test eval_adv_test adjust_learning_rate train_oat main train Net Net_binary ResNet ResNet18 ResNet34 Bottleneck ResNet101 test ResNet50 BasicBlock ResNet152 SmallCNN BasicBlock NetworkBlock WideResNet makedirs join st_size print stat mkdir_p isfile join isdir print extractall download int list perturb print min tqdm num_correct ceil range run join remove min eval save add_summary append Summary Summary fill_ data model Variable backward clamp random zero_grad SGD sign to sum range data model Variable backward clamp random zero_grad SGD sign to sum range data backward Variable clamp print random zero_grad SGD sign to sum range model_target _cw_whitebox print tqdm eval _deepfool_whitebox _pgd_whitebox print eval _pgd_blackbox load multi_gpu print source_model_path target_model_path eval_adv_test_whitebox eval_adv_test_blackbox DataParallel load_state_dict model_path to white_box_attack view print data KLDivLoss sub_ renorm_ model softCrossEntropy criterion_ce add_ zero_grad SGD sign div_ max view criterion_kl range detach log_softmax randn_like requires_grad_ eval softmax norm backward Variable clamp min any train step len format backward print dataset zero_grad trades_loss one_hot_tensor item step enumerate len format backward print step zero_grad dataset trades_loss item train next enumerate len format print eval dataset len format print eval dataset len format print eval dataset len param_groups lr SGD adjust_learning_rate save load_model_dir eval_test iter range state_dict format oat load_epoch train_oat join time parameters train randn print ResNet18 size net | # Removing Undesirable Feature Contributions using Out-of-Distribution Data This repository is the official implementation of "Removing Undesirable Feature Contributions using Out-of-Distribution Data", published as a conference paper at [ICLR 2021](https://openreview.net/forum?id=eIHYL6fpbkA). You can download the OOD dataset and pre-trained models on CIFAR-10 here: - [OOD dataset](https://drive.google.com/file/d/13Nyw3b8lBfBTbVnUEw_yyFGW7x6rjWRD/view?usp=sharing) - [OAT+PGD](https://drive.google.com/file/d/1uvUECJJi3ccgWilNFHqoplGQPiE-iMmF/view?usp=sharing) - [OAT+TRADES](https://drive.google.com/file/d/1p7UEBeVjQfu3W5CWzhvkUga5iDxGjDTL/view?usp=sharing) - [OAT+RST](https://drive.google.com/file/d/1PbY7geYAUOhcKsN8dSc5q6UVg_x_AWys/view?usp=sharing) You can also create your own OOD datasets using the work of [Carmon et al.](https://github.com/yaircarmon/semisup-adv) | 938 |
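As a hedged reading of the method in this row: out-of-distribution images carry no class information, so OAT trains them toward a uniform label distribution alongside the usual (adversarial) loss on in-distribution data. The weighting and exact target below are assumptions; `train_oat` in the repository is the authoritative version.

```python
# Hedged sketch of an OAT-style objective: supervised/adversarial loss on in-distribution
# data plus a soft cross-entropy pushing OOD predictions toward the uniform distribution.
import torch
import torch.nn.functional as F

def oat_objective(model, x_in, y_in, x_ood, num_classes, lam=1.0):
    loss_in = F.cross_entropy(model(x_in), y_in)
    log_p_ood = F.log_softmax(model(x_ood), dim=1)
    uniform = torch.full_like(log_p_ood, 1.0 / num_classes)
    loss_ood = -(uniform * log_p_ood).sum(dim=1).mean()
    return loss_in + lam * loss_ood
```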
SamuelMarks/doctrans | ['stochastic optimization'] | ['Adam: A Method for Stochastic Optimization'] | cdd/tests/test_cli_sync.py cdd/conformance.py cdd/tests/test_cst_utils.py cdd/routes/parser_utils.py cdd/tests/test_cli_openapi.py cdd/openapi/parse.py cdd/openapi/parser_utils.py cdd/openapi/emitter_utils.py cdd/tests/test_ast_cst_utils.py cdd/docstring_utils.py cdd/tests/mocks/argparse.py cdd/tests/test_route_emit.py cdd/tests/test_parsers.py cdd/__init__.py cdd/tests/utils_for_tests.py cdd/tests/test_emitters.py cdd/routes/parse.py cdd/tests/__init__.py cdd/defaults_utils.py cdd/gen.py cdd/tests/test_doctrans_utils.py setup.py cdd/parse.py cdd/tests/test_pure_utils.py cdd/tests/test_cli_sync_properties.py cdd/pure_utils.py cdd/tests/mocks/cstify.py cdd/tests/test_parser_utils.py cdd/tests/test_default_utils.py cdd/tests/mocks/routes.py cdd/docstring_parsers.py cdd/ast_cst_utils.py cdd/openapi/__init__.py cdd/tests/test_source_transformer.py cdd/tests/test_cli_doctrans.py cdd/tests/test_gen_routes.py cdd/tests/test_setup.py cdd/pkg_utils.py cdd/emitter_utils.py cdd/tests/mocks/doctrans.py cdd/openapi/gen_routes.py cdd/__main__.py cdd/doctrans.py cdd/tests/mocks/cst.py cdd/tests/test_doctrans.py cdd/tests/mocks/eval.py cdd/tests/test_exmod_utils.py cdd/tests/test_ast_utils.py cdd/source_transformer.py cdd/tests/test_cli.py cdd/tests/test_docstring_utils.py cdd/emit.py cdd/tests/test_emitter_utils.py cdd/cst.py cdd/tests/test_cst.py cdd/tests/mocks/__init__.py cdd/tests/mocks/openapi.py cdd/tests/test_cli_exmod.py cdd/doctrans_utils.py cdd/tests/mocks/classes.py cdd/tests/mocks/exmod.py cdd/routes/emit.py cdd/tests/test_sync_properties.py cdd/parser_utils.py cdd/tests/test_cli_gen.py cdd/tests/test_cli_gen_routes.py cdd/tests/mocks/sqlalchemy.py cdd/sync_properties.py cdd/cst_utils.py cdd/tests/mocks/ir.py cdd/tests/test_pkg_utils.py cdd/tests/test_utils_for_tests.py cdd/exmod.py cdd/tests/test_openapi_sub.py cdd/routes/__init__.py cdd/tests/test_ast_equality.py cdd/ast_utils.py cdd/tests/test_openapi_bulk.py cdd/tests/mocks/json_schema.py cdd/openapi/gen_openapi.py cdd/tests/test_gen.py cdd/tests/test_route_parse.py cdd/routes/emit_constants.py cdd/openapi/emit.py cdd/tests/test_exmod.py cdd/exmod_utils.py cdd/tests/mocks/methods.py cdd/tests/mocks/docstrings.py cdd/tests/test_marshall_docstring.py cdd/tests/test_conformance.py setup_py_main main to_funcs maybe_replace_function_args debug_doctrans find_cst_at_ast maybe_replace_doc_str_in_function_or_class Delta maybe_replace_function_return_type _to_code get_value infer_type_and_default Undedined func_arg2param param2argparse_param _resolve_arg get_function_type find_in_ast is_argparse_add_argument _infer_type_and_default_for_list_or_tuple parse_to_scalar _parse_default_from_ast set_value cmp_ast is_argparse_description get_ass_where_name set_docstring _generic_param2ast emit_arg merge_modules get_at_root set_arg to_annotation _parse_node_for_arg RewriteAtQuery to_type_comment it2literal annotate_ancestry param2ast get_doc_str node_to_dict find_ast_type del_ass_where_name _infer_type_and_default_from_quoted set_slice emit_ann_assign merge_assignment_lists _default_options _get_name_from_namespace ground_truth _conform_filename cst_parse reindent_block_with_pass_body set_prev_node infer_cst_type cst_parse_one_node cst_scanner get_construct_name cst_parser cst_scan needs_quoting extract_default _remove_default_from_param set_default_doc ast_parse_fix remove_defaults_from_intermediate_repr _parse_out_default_and_doc 
_scan_phase_numpydoc_and_google _set_name_and_type _infer_default parse_docstring _set_param_values _scan_phase _scan_phase_rest _parse_phase_numpydoc_and_google _fill_doc_with_afterward _parse_phase _parse_phase_rest _return_parse_phase_numpydoc_and_google Style _get_start_of_last_found header_args_footer_to_str derive_docstring_format _get_token_last_idx_if_no_next_token _get_end_of_last_found_numpydoc _find_end_of_args_returns _last_doc_str_token _get_token_start_idx emit_param_str _get_end_of_last_found _get_token_last_idx ensure_doc_args_whence_original parse_docstring_into_header_args_footer doctrans has_type_annotations clear_annotation doctransify_cst DocTrans sqlalchemy_table function argparse_function file json_schema class_ sqlalchemy docstring param2json_schema_property generate_repr_method _make_call_meth _handle_value parse_out_param get_internal_body ast_parse_fix _parse_return _handle_keyword RewriteName param_to_sqlalchemy_column_call interpolate_defaults exmod emit_file_on_hierarchy _emit_symbol get_module_contents gen sqlalchemy_table _class_from_memory _inspect function _merge_inner_function json_schema class_ sqlalchemy docstring argparse_ast _inspect_process_ir_param infer _join_non_none _interpolate_return json_schema_property_to_param column_call_to_param ir_merge get_source relative_filename quote set_attr rpartial strip_starting pluralise blockwise count_iter_items is_triple_quoted is_ir_empty no_magic_dir2attr identity reindent paren_wrap_code deindent unquote multiline assert_equal balanced_parentheses filename_from_mod_or_filename strip_split lstrip_namespace location_within emit_separating_tabs indent_all_but_first update_d get_module has_nl code_quoted sanitise set_item diff to_code ast_parse sync_properties sync_property require_file_existent main _build_parser openapi components_paths_from_name_model_route_id_crud openapi_bulk gen_routes upsert_routes openapi extract_entities read create destroy create_util bottle get_route_meta TestDefaultUtils TestAstCstUtils TestAstEquality TestAstUtils TestCli TestCliDocTrans TestCliExMod TestCliGen TestCliGenRoutes TestOpenApi TestCliSync TestCliSyncProperties TestConformance TestCst TestCstUtils TestDocstringUtils TestDocTrans TestDocTransUtils TestEmitters TestEmitterUtils TestExMod TestExmodUtils TestGen populate_files TestGenRoutes populate_files TestMarshallDocstring TestOpenApi TestOpenApiBulk TestParsers TestParserUtils TestPkgUtils TestPureUtils TestRouteEmit TestRouteEmit TestSetupPy TestSourceTransformer TestSyncProperties populate_files TestUtilsForTests remove_args_from_docstring ShowSourceLoader replace_docstring unittest_main run_cli_test run_ast_test inspectable_compile mock_function reindent_docstring C f attrgetter setup map filter body main print ljust format print format __name__ enumerate ne nop partition formatted_doc_str insert name debug_doctrans replaced removed __name__ added deepcopy add_return_typ value nop cmp_ast name remove_return_typ debug_doctrans replaced FunctionDefinitionStart body returns removed __name__ added deepcopy nop rpartition attrgetter __name__ name map debug_doctrans replaced FunctionDefinitionStart body removed range added len needs_quoting get_value __name__ isinstance ast_parse_fix set_value isinstance tuple rpartial filter body extract_default get setdefault infer_type_and_default _resolve_arg ast_parse_fix _parse_node_for_arg walk tuple id isinstance value isinstance Expr set_value pop setattr isinstance args filter body next enumerate len _location list kwonlyargs 
isinstance args map targets iter_child_nodes walk enumerate arg isinstance arg isinstance _parse_default_from_ast isinstance dumps any code_quoted __name__ _infer_type_and_default_for_list_or_tuple get_value __name__ get_value isinstance AST isinstance _fields getattr isinstance zip isinstance list filter get_value isinstance attrgetter del_ass_where_name tuple map get_ass_where_name from_iterable Assign append deepcopy update items list _get_name_from_namespace map OrderedDict parse_func strip_split pluralise getattr find_in_ast split print file realpath visit emit_func RewriteAtQuery replaced expanduser find_in_ast cst_scanner cst_parser enumerate find append join enumerate cst_scan clear join is_triple_quoted all endswith tuple strip map balanced_parentheses filter startswith append add_and_clear enumerate split deque partial map any frozenset startswith tuple strip map dict filter get_construct_name split Name ast_parse_fix strip isinstance strip location_within len enumerate count_iter_items int partial frozenset literal_eval isdecimal takewhile update deepcopy partial extract_default update format update _scan_phase _parse_phase derive_docstring_format name attrgetter map count_iter_items get isspace location_within deepcopy clear white_spacer partial insert map _return_parse_phase_numpydoc_and_google append takewhile enumerate isspace splitlines startswith append range enumerate len join tuple map append len name attrgetter map get _infer_default rstrip format tuple lstrip unquote isinstance get_value literal_eval __name__ update next _fill_doc_with_afterward filter next partial _set_name_and_type update _remove_default_from_param strip map OrderedDict any startswith interpolate_defaults find count_iter_items join isspace clear filter any startswith append takewhile enumerate clear isspace copy append enumerate len range append clear range len range len count_iter_items isspace takewhile count_iter_items isspace partial namedtuple takewhile PrevParam count_iter_items _get_start_of_last_found isspace derive_docstring_format _get_token_last_idx_if_no_next_token _find_end_of_args_returns filter _last_doc_str_token any _get_end_of_last_found startswith takewhile count_iter_items isspace map _get_token_start_idx _get_token_last_idx takewhile indent parse_docstring_into_header_args_footer count_iter_items isspace format partition indent has_nl takewhile rpartition len partial map rest any google numpydoc deepcopy list doctransify_cst cst_parse visit fix_missing_locations ast_parse setattr hasattr isinstance maybe_replace_function_args find_cst_at_ast maybe_replace_doc_str_in_function_or_class maybe_replace_function_return_type walk get_internal_body update get list partial map visit fix_missing_locations get join list items partial format header_args_footer_to_str map splitlines parse_docstring_into_header_args_footer to_code Module format_str items list tuple map get_internal_body filter dict items partial map isinstance extract_default format set_value get_value next extract_default unquote rstrip list filter len get pop list get_value map elts startswith append set_value isinstance print Name get_value Load lstrip Call startswith append keyword tuple keys groupby itemgetter tuple sep list basename map file dirname format partial import_module deque join items Module print __file__ any makedirs items format isinstance getfile _emit_symbol rpartial sep map dirname rpartition get format parse replace partial attrgetter close isfile join print filter any body makedirs merge_modules format 
Module print Assign file merge_assignment_lists tuple strip sanitise_emit_name fix_missing_locations list map from_iterable getattr rpartition update parse get_at_root format get_docstring eval compile Module filter get_module body to_code rstrip find_ast_type isinstance get_docstring get_value targets OrderedDict dict lstrip _merge_inner_function deque _inspect name get_docstring _merge_inner_function class_ ir_merge get_source function rpartial filter ir_merge next walk items partial map OrderedDict filter parser isfunction get_source ir_merge abs get_function_type len OrderedDict getattr update arg partial replace get_docstring islice _interpolate_return ir_merge setattr pop deepcopy _inspect isinstance cycle body docstring args list value filterfalse parse_docstring parse_out_param get_docstring get_value OrderedDict is_argparse_add_argument deepcopy OrderedDict docstring isinstance map get_value assert_equal filter keywords ir_merge next get_docstring value get_value get update itemgetter map _join_non_none keys frozenset from_iterable format endswith lstrip_typings lstrip __name__ default rstrip rpartial OrderedDict filter next list format map from_iterable dict filter args endswith format pop get partial attrgetter isinstance bases rpartial get_value map filter any next args casefold startswith lstrip split update lstrip op len enumerate cmp tuple range len deque zip count tuple type setattr annotate_ancestry get_docstring parse format file zip sync_property annotate_ancestry list value hasattr visit eval RewriteAtQuery AnnAssign find_in_ast compile strip_split add_argument add_mutually_exclusive_group add_parser ArgumentParser add_subparsers gen_routes pluralise exmod truth doctrans command getattr openapi_bulk filename require_file_existent parse_args gen expanduser sum format _build_parser Namespace realpath deque pop error sync_properties output_filename isfile error format deque map format items create itemgetter map extend filter iter sqlalchemy filename_from_mod_or_filename next keys walk append rpartial get_names filter filename_from_mod_or_filename keys walk extract_entities format replace append isspace add_then_clear_stack enumerate isinstance parse_docstring get_docstring decorator_list filter next find join deepcopy format function Module class_ deepcopy path filter close format deepcopy isinstance assertTrue tuple assertEqual cmp_ast map to_code output_checker assertEqual assertIsNone main __dict__ name ShowSourceLoader module_from_spec exec NamedTemporaryFile spec_from_loader setattr compile format set_value get_docstring Expr abs Expr isspace lstrip filter any splitlines startswith append | cdd-python ==========   [](https://opensource.org/licenses/Apache-2.0) [](https://github.com/offscale/cdd-python/actions)   [](https://codecov.io/gh/offscale/cdd-python) [](https://github.com/psf/black) | 939 |
Sandipan99/POLAR | ['word embeddings'] | ['The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings'] | Downstream Task/Discrim_Attr/get_data.py Downstream Task/Discrim_Attr/classify_discrim_attr_TASK.py Downstream Task/sentiment/classify_sentiment.py Downstream Task/np_bracketing/classify_bracketing.py Downstream Task/Discrim_Attr/classify_discrim_attr.py Downstream Task/newsgroups/get_newsgroup_data.py Downstream Task/TREC/create_data.py Downstream Task/np_bracketing/get_data.py Downstream Task/sentiment/get_data.py Downstream Task/newsgroups/classify.py Downstream Task/TREC/classify_task.py Downstream Task/TREC/classify.py loadVectors getFeats getSimilarityScoreForWords trainAndTest getOneHot main2 f1_score main1 getSimilarity loadVectors getFeats getSimilarityScoreForWords trainAndTest getOneHot main2 f1_score main1 getSimilarity loadVectors getFeats trainAndTest getOneHot main get_everything get_Xy loadVectors getFeats trainAndTest getOneHot main loadVectors getFeats trainAndTest getOneHot main loadVectors getFeats trainAndTest getOneHot main loadVectors getFeats trainAndTest getOneHot main get_label read_lines get_Xy print readlines split array len zeros array extend zeros enumerate score fit float load max loadVectors print trainAndTest append array open load int loadVectors print append accuracy_score enumerate open load int loadVectors trainAndTest append array open word_tokenize lower append range len int dump fetch_20newsgroups print get_Xy open len max print get_label print | Sandipan99/POLAR | 940 |
Sandy-Zeng/NPAttack | ['adversarial attack'] | ['Improving Query Efficiency of Black-box Adversarial Attack'] | ANP/ANP_MNIST.py run_targeted.py NPAttack_untargeted.py resnet18.py NPAttack_targeted.py wrn/WRN.py utils/np_utils.py run_untargeted.py utils/dataloader.py ANP/ANP_CIFAR.py ANP/NP_IMAGENET.py ANP/ANP_CIFAR_train.py NPAttack_IMAGENET.py ANP/ANP_IMAGENET.py mnist_mlp.py mnist MLP decode load_attack_model load_NP_model recons CW_loss generate_data upsample get_context NP_Attack torch_arctanh sample_image_cifar generate_grid save load_attack_model load_NP_model CW_loss NP_Attack_Targeted torch_arctanh sample_image_cifar sample_image load_attack_model load_NP_model CW_loss NP_Attack sample_image_cifar sample_image ResNet ResNet18 ResNet34 Bottleneck ResNet101 test ResNet50 BasicBlock ResNet152 NP RGB2Y NP idx_to_y idx_to_x get_context_idx rgb2ycbcr test generate_grid np_loss train kl_div_gaussians RGB2Y NP idx_to_y idx_to_x get_context_idx rgb2ycbcr test generate_grid np_loss train kl_div_gaussians RGB2Y NP idx_to_y idx_to_x get_context_idx rgb2ycbcr test generate_grid np_loss train kl_div_gaussians NP ADV_MNIST RGB2Y generate_grid_cifar idx_to_y idx_to_x get_context_idx rgb2ycbcr construct_gen generate_grid initialize_weights BasicBlock Network load DataParallel load_state_dict MLP view stack unsqueeze linspace cat min mkdir cpu save_image cat clamp interpolate fromarray uint8 squeeze astype numpy makedirs int view model expand append to range cat detach int time permute interpolate zeros to range detach view model expand save to detach torch_arctanh I save argmax std recons view expand shape to range detach get_context mean item float tanh time CW_loss print clamp reshape cpu numpy E mean zeros sum log eval to print to module append array range choice cpu contiguous min expand append tensor save_image cat makedirs rsample construct_gen I tensor argmax std idx_to_x view expand shape permute append to range detach LR get_context_idx mean item float idx_to_y CW_loss print Variable clamp reshape cpu numpy E load mnist WRN print DataParallel attack_path load_state_dict cuda load mkdir rsample construct_gen tensor idx_to_x permute append LR get_context_idx idx_to_y Variable randn print ResNet18 size net dot T array tensor array range index_select index_select expand sum kl_div_gaussians sum join format log_dir view get_context_idx model backward print dataset zero_grad makedirs expand np_loss item step enumerate len join format log_dir eval Normal save_image str permute cat att_type min cpu sample idx_to_y idx_to_x randint model cuda cuda view stack unsqueeze cuda cat data isinstance Conv2d zero_ uniform_ BatchNorm2d kaiming_normal_ Linear | # NPAttack_ECCV2020 This is our Pytorch implementation of NPAttack. **Improving Query Efficiency of Black-box Adversarial Attack (ECCV2020)** ## Pre-trained model You can download the pre-trained NP model for MNIST, CIFAR and ImageNet from https://drive.google.com/file/d/1TysxLn1SdVlPuwATPwSmq0T141oGqlRP/view?usp=sharing and put them into the folder of ./np_pretrained The pre-trained target model in our experiments are available in https://drive.google.com/file/d/1uN22WfasesNfotAMVHCJ-9KjVh0bWdeP/view?usp=sharing , you can downloads them and put them into the folder of ./target_model or train you own models (Noted that if you train your own model, please be sure the input images are normalized to [-0.5, 0.5] so as to match the normalization method of NP model ). ## NP model pre-training 1. NP model for MNIST ```python | 941 |
Sanster/tf_ctpn | ['scene text detection'] | ['Detecting Text in Natural Image with Connectionist Text Proposal Network'] | lib/utils/visualization.py lib/datasets/imdb.py lib/nets/vgg16.py tools/ICDAR13/script.py tools/ICDAR13_Det/rrc_evaluation_funcs.py lib/model/test.py lib/model/__init__.py tools/trainval_net.py lib/setup.py lib/setup_cpu.py lib/datasets/pascal_voc.py lib/model/bbox_transform.py lib/text_connector/detectors.py tools/convert_utils.py lib/nets/mobilenet/conv_blocks.py tools/mlt17_to_voc.py lib/datasets/__init__.py lib/datasets/factory.py lib/utils/common.py lib/nets/resnet_v1.py tools/demo.py lib/text_connector/__init__.py lib/datasets/voc_eval.py lib/nets/network.py lib/utils/__init__.py lib/nms/py_cpu_nms.py lib/model/nms_wrapper.py lib/roi_data_layer/roidb.py tools/icdar13_to_voc.py lib/roi_data_layer/__init__.py tools/freeze_graph.py tools/icdar.py lib/layer_utils/generate_anchors.py lib/nets/mobilenet/mobilenet_v2.py lib/text_connector/text_connect_cfg.py lib/text_connector/other.py lib/text_connector/text_proposal_connector_oriented.py lib/utils/timer.py tools/anchor_drawer.py tools/ICDAR13_Det/script.py tools/ICDAR13/rrc_evaluation_funcs.py lib/nets/mobilenet_v2.py lib/setup_cpu_win.py lib/datasets/ds_utils.py main.py lib/layer_utils/proposal_top_layer.py tools/_init_paths.py lib/text_connector/text_proposal_graph_builder.py tools/icdar13_split_label.py lib/model/train_val.py lib/utils/blob.py lib/text_connector/text_proposal_connector.py lib/layer_utils/anchor_target_layer.py lib/model/config.py tools/ICDAR15/rrc_evaluation_funcs.py lib/roi_data_layer/layer.py tools/ICDAR15/script.py lib/nets/mobilenet/mobilenet.py lib/layer_utils/proposal_layer.py lib/roi_data_layer/minibatch.py lib/utils/helper.py lib/nets/squeezenet.py find_in_path customize_compiler_for_nvcc custom_build_ext locate_cuda find_in_path customize_compiler_for_nvcc custom_build_ext find_in_path custom_build_ext unique_boxes xywh_to_xyxy validate_boxes xyxy_to_xywh filter_small_boxes get_imdb list_imdbs imdb pascal_voc parse_rec voc_eval voc_ap _unmap _compute_targets anchor_target_layer generate_anchors _scale_enum _whctrs _ratio_enum generate_anchors_pre _mkanchors proposal_layer proposal_top_layer clip_boxes bbox_transform bbox_transform_inv cfg_from_list get_output_tb_dir cfg_from_file _merge_a_into_b get_output_dir nms _clip_boxes im_detect _get_image_blob _get_blobs train_net get_training_roidb filter_roidb SolverWrapper MobileNetV2 Network Resnetv1 resnet_arg_scope SqueezeNet vgg16 expand_input_by_factor _v1_compatible_scope_naming _fixed_padding split_separable_conv2d expanded_conv _make_divisible split_conv _split_divisible mobilenet depth_multiplier _scope_all safe_arg_scope _fixed_padding apply_activation op _set_arg_scope_defaults _make_divisible NoOpScope mobilenet_base global_pool training_scope mobilenet mobilenet_base training_scope py_cpu_nms RoIDataLayer get_minibatch _get_image_blob prepare_roidb TextDetector clip_boxes threshold Graph Config TextProposalConnector TextProposalConnector TextProposalGraphBuilder im_list_to_blob prep_im_for_blob check_dir read_rgb_img Timer _draw_single_box draw_bounding_boxes parse_args draw_anchors generate_xml build_voc_dirs vis_detections save_result draw_rpn_boxes demo parse_args recover_scale get_model_filenames main parse_args save_result_txt demo generate_xml _is_hard get_ltrb clip_line get_clockwise draw_four_vectors get_img_scale split_text_line test Point split_text_line2 main Line parse_line draw_bounding_box parse_args combined_roidb 
add_path evaluation_imports evaluate_method default_evaluation_params validate_data evaluation_imports evaluate_method default_evaluation_params validate_data evaluation_imports evaluate_method default_evaluation_params validate_data pathsep pjoin exists split find_in_path items pjoin pathsep dirname sep append _compile compiler_so dot array unique int parse findall text append find arange concatenate size maximum sum max range parse_rec cumsum argmax max sum range eps format astype mkdir float enumerate minimum join print sort maximum voc_ap argsort zeros bool array len RPN_BBOX_INSIDE_WEIGHTS _unmap bbox_overlaps argmax RPN_FG_FRACTION ones sum RPN_BATCHSIZE ascontiguousarray choice fill RPN_POSITIVE_WEIGHT empty int RPN_CLOBBER_POSITIVES reshape _compute_targets zeros array fill empty bbox_transform arange _whctrs ceil array _mkanchors generate_anchors arange reshape transpose astype float32 int32 meshgrid astype int32 sqrt _whctrs round _mkanchors _whctrs _mkanchors decode nms RPN_POST_NMS_TOP_N clip_boxes reshape hstack bbox_transform_inv RPN_NMS_THRESH RPN_PRE_NMS_TOP_N RPN_TOP_N clip_boxes reshape hstack bbox_transform_inv choice zeros transpose log dtype exp astype shape zeros minimum maximum join EXP_DIR name abspath ROOT_DIR makedirs join EXP_DIR name abspath ROOT_DIR makedirs items ndarray isinstance type array _merge_a_into_b literal_eval zip split MAX_SIZE min astype float32 SCALES shape resize append im_list_to_blob float max _get_image_blob minimum maximum _clip_boxes array test_image _get_blobs print prepare_roidb append_flipped_images USE_FLIPPED print format len ConfigProto filter_roidb pad int max append range identity conv2d zip append _split_divisible enumerate split items list hasattr _make_divisible pop get deepcopy get as_list as_list pool_op convert_to_tensor set_shape xavier_initializer truncated_normal_initializer deepcopy append maximum minimum USE_ALL_GT _get_image_blob randint empty array len prep_im_for_blob read_rgb_img PIXEL_MEANS range len image_index toarray roidb argmax max range image_path_at len threshold zeros max range len min astype float32 shape resize float max makedirs imread cvtColor COLOR_BGR2RGB line Draw rectangle ceil getsize fromarray int uint8 copy _draw_single_box round array range error add_argument ArgumentParser append rectangle join mkdir str int Document append float append_xml_node_attr show add_line format subplots set_title text draw Line2D axis tight_layout imshow join line imwrite append int join imwrite putText FONT_HERSHEY_SIMPLEX copy pre_process rectangle recover_scale enumerate toc vis_detections print save_result COLOR_RGB2BGR tic detect read_rgb_img draw_rpn_boxes TextDetector Timer cvtColor recover_scale im_detect join result_dir print exit tag makedirs get_checkpoint_state model_checkpoint_path join basename join makedirs split join range int split min max shape float min max ceil int range append append sorted line line get_ltrb int get_clockwise asarray draw_four_vectors y Line print boxPoints waitKey Point imshow ceil minAreaRect draw_bounding_box contain range append tt uint8 asarray line boxPoints astype waitKey minAreaRect imshow zeros get_ltrb imwrite resize open list basename shape writelines append imread format get_img_scale splitext generate_xml listdir join print sort build_voc_dirs extend filter split_text_line2 split get_imdb imdb extend classes insert load_zip_file validate_lines_in_file area xmin iteritems ymax one_to_one_match many_to_one_match decode_utf8 append range format xmax import_module 
load_zip_file empty get_tl_line_values_from_file_contents float ymin namedtuple int8 one_to_many_match Rectangle zeros center_distance len compute_ap polygon_from_points get_intersection_over_union rectangle_to_polygon get_intersection | # tf_ctpn A TensorFlow implementation of [CTPN: Detecting Text in Natural Image with Connectionist Text Proposal Network](https://arxiv.org/abs/1609.03605). Most of the code in this project is adapted from [CTPN](https://github.com/tianzhi0549/CTPN), [tf-faster-rcnn](https://github.com/endernewton/tf-faster-rcnn) and [text-detection-ctpn](https://github.com/eragonruan/text-detection-ctpn). The results of the pretrained model on ICDAR13: | Net | Dataset | Recall | Precision | Hmean | |-------|----------|---------|-------------|------------| | Origin CTPN |ICDAR13 training data + ?|73.72% | 92.77% | 82.15%| |vgg16| MLT17 latin/chn + ICDAR13 training data | 74.26% | 82.46% | 78.15% | | 942
SarahQiong/CTDGMR | ['density estimation'] | ['Gaussian Mixture Reduction with Composite Transportation Divergence'] | CTDGMR/utils.py CTDGMR/greedy.py CTDGMR/distance.py CTDGMR/optGMR.py CTDGMR/barycenter.py reduction.py CTDGMR/minCTD.py barycenter GMM_CS GMM_CTD Gaussian_distance GMM_L2 GMR_greedy moment_preserving_merge bound_on_KL wbarycenter_merge GMR_CTD entropy obj_grads GMR_opt GMR_opt_BFGS opt_obj obj_grads_theta obj_grads_w dmixf log_normal rmixGaussian df sqrtm ones identity shape sum range concatenate sqrt success T compute_sigma minimize print reshape inv dot cholesky zeros det norm T diag pdf pi dot sqrt sqrtm asscalar trace solve_triangular cholesky eye sum log pi sqrtm Gaussian_distance log einsum exp sum range log_normal tile eigvals enumerate det T norm reshape outer emd2 zeros log_normal exp log_normal exp dot T dot sqrtm eigvals sum log T reshape moment_preserving_merge bound_on_KL delete copy dot shape trace Gaussian_distance sum range GMM_L2 wbarycenter_merge T exp log_normal zeros_like dot range T log_normal exp zeros_like reshape dot shape sum range einsum dot sum log_normal exp T log_normal exp zeros_like reshape dot shape sum range einsum sum T zeros_like reshape square pi dot shape cholesky empty log einsum enumerate multinomial vstack check_random_state concatenate reshape T sqrt df | This repository contains the code for the paper **[Gaussian Mixture Reduction with Composite Transportation Divergence ](https://arxiv.org/abs/2002.08410.pdf)** If you use this code, please cite ``` @article{zhang2020unified, title={Gaussian Mixture Reduction with Composite Transportation Divergence}, author={Zhang, Qiong and Zhang, Archer Gong and Chen, Jiahua}, journal={arXiv preprint arXiv:2002.08410}, year={2020} | 943 |
SarikGhazarian/PredictiveEngagement | ['dialogue evaluation'] | ['Predictive Engagement: An Efficient Metric For Automatic Evaluation of Open-Domain Dialogue Systems'] | pytorch_src/main.py pytorch_src/calculate_correlations.py pytorch_src/create_utt_embed.py pytorch_src/engagement_classifier.py pytorch_src/preprocess_data.py calculate_pearson_spearman load_Bert_embeddings make_Bert_embeddings Engagement_cls BiLSTM preprocess_data split_train_test_valid sub_AMT_set create_utts_files pearsonr round max values open spearmanr str list add DictWriter append set writeheader float DictReader items print writerow min cosine_similarity len join str format encode reader dump print BertClient append next enumerate open load items list str print len open round open str list TweetTokenizer append range format replace close float tokenize enumerate join items print write sub len format count print readlines write close append open int format print readlines close open writelines append len int format print writerow readlines strip len DictWriter writeheader open range append split | # PredictiveEngagement This repository contains code for [Predictive Engagement](https://arxiv.org/pdf/1911.01456.pdf) paper. If you use it please cite it as: ``` @inproceedings{ghazarian2020predictive, title={Predictive Engagement: An Efficient Metric For Automatic Evaluation of Open-Domain Dialogue Systems}, author={Ghazarian, Sarik and Weischedel, Ralph and Galstyan, Aram and Peng, Nanyun}, booktitle={The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)}, paper_url={https://arxiv.org/pdf/1911.01456}, pages={7789–7796}, year={2020} | 944 |
Sausage-SONG/Few-shot-action-recognition | ['action recognition'] | ['Semi-Supervised Few-Shot Atomic Action Recognition'] | attention_pool.py moco/rename.py tcn.py splits/generate_dataset_json.py moco/main_moco.py encoder.py train.py moco/dataset.py ctc.py moco/moco_encoder.py dataset.py ctc/ctc_decode.py utils.py moco/builder.py relation_net.py test.py moco/encoder.py ctc/Common.py ctc/ctc_loss.py moco/tcn.py AttentionPooling AttentionPoolingConv decode make_new_beam logsumexp StandardDataset get_data_loader FinegymDataset ClassBalancedSampler AVADataset use_aug_seq InceptionI3d MaxPool3dSamePadding InceptionModule weights_init Simple3DEncoder Unit3D MultiRelationNetwork RelationNetwork RelationNetworkZero weights_init Chomp1d TemporalConvNet TemporalBlock ctc_length_mask read_split ndarray_equal ctc_predict_single ctc_predict my_load mean_confidence_interval PCA compute_score weights_init ctc_probability ctc_alignment_predict wordToLabelSeq extendByBlanks decode make_new_beam logsumexp ctcLoss recLabelingProb ctcLabelingProb emptyCache testLoss concat_all_gather MoCo HumanNonhumanDataset MoCoDataset use_aug_seq InceptionI3d MaxPool3dSamePadding InceptionModule weights_init Simple3DEncoder Unit3D read_split AverageMeter accuracy save_checkpoint ProgressMeter adjust_learning_rate main_worker main train C3D_TCN Chomp1d TemporalConvNet TemporalBlock generate_finegym read_finegym_info read_split generate_standard max sum all log items sorted log logsumexp shape make_new_beam range append range aug_seq len inst_num class_num ClassBalancedSampler DataLoader fill_ ones size out_channels normal_ sqrt zero_ __name__ _ppf array len kaiming_normal_ weight rstrip readlines close append exists open new_full dtype range int join unsqueeze cat new_full int ctc_probability max range mean svd t expand_as append ctc_decode range len ctc_probability shape append argmax array range load join load_state_dict checkpoint append wordToLabelSeq emptyCache shape extendByBlanks len print ctcLabelingProb array str cat all_gather seed int world_size spawn multiprocessing_distributed warn device_count manual_seed main_worker parse_args gpu workers MoCo multiprocessing_distributed batch_size SGD DistributedDataParallel DataLoader adjust_learning_rate save_checkpoint arch cuda moco_k mlp moco_m set_device DistributedSampler rank MCDset load_state_dict range format init_process_group start_epoch distributed lr resume load int print moco_t set_epoch moco_dim parameters isfile train epochs gpu model zero_grad cuda display transpose update size item enumerate time criterion backward AverageMeter accuracy ProgressMeter step gpu len copyfile save param_groups schedule lr cos strip readlines close dict split append open dict read_finegym_info dict | # Semi-supervised Few-shot Atomic Action Recognition This repo contains the codes for our paper "Semi-supervised Few-shot Atomic Action Recognition". Please check our [paper](https://arxiv.org/abs/2011.08410) and [project page](https://sausage-song.github.io/home/FSAA/) for more details.  Our learning strategies are divided into two parts: 1) train an encoder with unsupervised learning; 2) train the action classification module with supervised learning. Regarding the encoder our model provides fine-grained spatial and temporal video processing with high length flexibility, which embeds the video feature and temporally combines the features with TCN. In terms of classification module, our models provides attention pooling and compares the multi-head relation. 
Finally, the CTC and MSE losses enable our model to perform time-invariant few-shot classification training. # Requirements pytorch >= 1.5.0 torchvision >= 0.6.0 numpy >= 1.18.1 scipy >= 1.4.1 [vidaug](https://github.com/okankop/vidaug) >= 0.1 | 945
Scienceseb/Learning-of-Image-Dehazing-Models-for-Segmentation-Tasks | ['image dehazing', 'semantic segmentation'] | ['Learning of Image Dehazing Models for Segmentation Tasks'] | train_DFS.py dataset.py data.py get_val_set get_training_set get_test_set DatasetFromFolder_3 main join join join L1Loss batchSize SEG_NET ndf zero_grad nEpochs ngf get_test_set DataLoader plot_current_losses ArgumentParser features forward cuda checkpoint_seg seed Vgg16 FloatTensor GANLoss get_training_set define_D Adam criterionGAN MSELoss OrderedDict parse_args range define_G cat get_val_set detach val format lamb testing eval output_nc manual_seed float checkpoint enumerate load checkpoint_current criterionMSE criterionL1 backward print Variable add_argument input_nc parameters netG print_network train step len | # Learning-of-Image-Dehazing-Models-for-Segmentation-Tasks PyTorch code for the paper **Learning of Image Dehazing Models for Segmentation Tasks, EUSIPCO 2019** (https://arxiv.org/pdf/1903.01530.pdf)<br/> ## **Approach:**<br/> The generator network receives an image with haze as an input and gives a candidate of a dehazed image as the output. Similar to the single image dehazing model, the generator loss is computed through LGAN , Lpixel, Lpercep and Lseg. . LGAN is the loss function from Isola et al. used to generate fake images. Lpixel is the reconstruction loss between the ground truth for dehazing (a.k.a. the real image) and the fake dehazed image, based on their individual pixel values, allowing the network to produce crisper images. Lpercep is the perceptual loss used for preserving important semantic elements of the image in the output of the generator. The segmentation loss Lseg, is computed by placing the output of the generator (i.e., the dehazed image) into the segmentation network. The obtained segmentation map is then compared to | 946 |
SeerLabs/entity-matching | ['information retrieval'] | ['Cleaning Noisy and Heterogeneous Metadata for Record Linking Across Scholarly Big Datasets'] | IMM.py normalizr.py index_reference_papers.py header_based_model.py HMM/HMM.py CMM.py name_parser.py similarityProfile.py TEM.py jaccard_similarity find_match mystring compare_jaccard_citations get_features citation_model normalize wos_matcher evaluate parse_wos_authors parse_csx_authors parse_wos_authors2 Normalizr get_features SimilarityProfile normalize2 normalize xstr last_letter_type multi_prob_titles load_DF title_metrics print_all_words_DF count_consecutive_puc count_non_ascii prob_title DF_max_min_mean count_words first_letter_type match_special_words find_match load_configuration lower sub str unescape Normalizr HTMLParser union intersection len join split fetchall parse_wos_authors cursor compare_jaccard_citations query calcFeatureVector connect add get value set nlevenshtein execute task_done empty join split parse_csx_authors len JoinableQueue Process join put start append range print split open round len join strip dict findall append split dict join findall join dict findall append split punctuation set punctuation replace translate lower unidecode sub maketrans len xstr bool defaultdict range len range range index split append sort word_tokenize print rstrip close transform_vec append open dict open split Elasticsearch fetchone Q print predict Search | # entity-matching Match entities between CiteSeerX and other digital libraries In this project, we attempt to develop a machine learning (ML) based method to match paper entities between CiteSeerX and other digital libraries, including but not limited to the IEEE Xplore (IEEE hereafter), DBLP, Web of Science (WoS hereafter). Like most ML-based methods, data preprocessing takes substantial efforts. The purpose of creating this codebase is to centralize working programs that accomplish different tasks so they can be reused for future people that take over corresponding roles. Models: 1. [HMM](HMM.py) (Header Matching Model): This model tries to match paper entities across data bases using information existing in the header of the papers including title, abstract, list of authors and venue. This model is used for matching CiteSeerX to digital libraries without citation information such as DBLP, IEEE and Medline. [HMM_readme](HMM/HMM_README.txt) shows the details for running HMM model. 2. [CMM](CMM.py) (Citation Matching Model): This model leverages citations for matching of the papers if citation information exists. 3. [TEM](TEM.py) (Title Evaluation Model): This model evaluates quality of the title. If title has a high quality, HMM model is used otherwise combination of CMM and HMM model is applied for the matching process. 4. [IMM](IMM.py) (Integrated Matching Model): This model integrates HMM and CMM with the help of TEM. ----------------------------------------------------------------------------------------------------- | 947 |
Semanti1/cnngeometric_pytorch | ['geometric matching'] | ['Convolutional neural network architecture for geometric matching'] | train.py eval_pf.py demo.py geotnf/transformation.py image/normalization.py util/train_test_fn.py data/download_datasets.py demo_orig.py aff_transform.py geotnf/point_tnf.py model/cnn_geometric_model.py data/pf_dataset.py data/synth_dataset.py model/loss.py util/torch_util.py correct_keypoints download_PF_willow download_pascal PFDataset SynthDataset PointTnf PointsToUnitCoords PointsToPixelCoords TpsGridGen SynthPairTnf AffineGridGen GeometricTnf normalize_image NormalizeImageDict CNNGeometric FeatureExtraction FeatureL2Norm FeatureRegression FeatureCorrelation TransformedGridLoss str_to_bool save_checkpoint BatchTensorToVars train test squeeze numel expand_as le sum join remove basename print extractall close urlopen ZipFile makedirs join remove basename print extractall close urlopen open makedirs NormAxis clone expand_as NormAxis clone expand_as isinstance Variable size add expand div unsqueeze cuda is_cuda join basename copyfile dirname save makedirs data format model pair_generation_tnf backward print cpu zero_grad item loss_fn step enumerate len format model pair_generation_tnf print eval loss_fn enumerate | # CNNGeometric PyTorch implementation  This is the implementation of the paper: I. Rocco, R. Arandjelović and J. Sivic. Convolutional neural network architecture for geometric matching. CVPR 2017 [[website](http://www.di.ens.fr/willow/research/cnngeometric/)][[arXiv](https://arxiv.org/abs/1703.05593)] using PyTorch ([for MatConvNet implementation click here](https://github.com/ignacio-rocco/cnngeometric_matconvnet)). If you use this code in your project, please cite use using: ```` @InProceedings{Rocco17, author = "Rocco, I. and Arandjelovi\'c, R. and Sivic, J.", title = "Convolutional neural network architecture for geometric matching", | 948 |
SenWu/dauphin | ['text classification', 'data augmentation'] | ['On the Generalization Effects of Linear Transformations in Data Augmentation'] | dauphin/image/models/shake_drop.py dauphin/image/transforms/auto_contrast.py dauphin/image/utils.py dauphin/image/transforms/cutout.py dauphin/utils.py dauphin/image/transforms/normalize.py dauphin/text/transforms/back_translate.py dauphin/image/transforms/translate_y.py dauphin/text/augment_policy.py dauphin/image/transforms/random_resize_crop.py dauphin/image/augment_policy.py dauphin/text/transforms/unif_rep.py dauphin/text/datasets/__init__.py dauphin/image/models/shake_shake.py dauphin/text/transforms/transform.py dauphin/image/transforms/mixup.py dauphin/text/datasets/utils.py dauphin/image/models/wide_resnet.py dauphin/image/scheduler.py dauphin/image/transforms/compose.py dauphin/image/transforms/solarize.py dauphin/text/transforms/tfidf_word_rep.py dauphin/image/models/mlp.py dauphin/image/datasets/mnist_dataset.py dauphin/image/transforms/transform.py dauphin/text/data.py dauphin/image/modules/soft_cross_entropy_loss.py dauphin/text/task.py dauphin/text/text.py dauphin/image/transforms/smooth.py dauphin/image/models/pyramidnet.py dauphin/image/models/__init__.py dauphin/image/config.py dauphin/image/transforms/vertical_flip.py dauphin/image/transforms/color.py dauphin/image/transforms/to_tensor.py dauphin/image/transforms/utils.py dauphin/image/image.py dauphin/text/transforms/compose.py dauphin/image/transforms/center_crop.py dauphin/image/transforms/horizontal_filp.py dauphin/image/datasets/__init__.py dauphin/image/datasets/cifar_dataset.py dauphin/text/config.py dauphin/image/transforms/identity.py dauphin/text/scheduler.py dauphin/image/transforms/brightness.py dauphin/image/transforms/shear_x.py dauphin/image/transforms/translate_x.py dauphin/image/transforms/blur.py dauphin/image/transforms/rotate.py dauphin/image/transforms/invert.py dauphin/text/datasets/imdb_dataset.py dauphin/text/modules/bert_model.py dauphin/image/transforms/shear_y.py dauphin/image/transforms/sharpness.py dauphin/image/transforms/__init__.py dauphin/image/transforms/equalize.py dauphin/image/transforms/contrast.py dauphin/image/transforms/resize.py dauphin/image/transforms/posterize.py dauphin/text/transforms/__init__.py dauphin/image/transforms/random_crop.py setup.py dauphin/text/transforms/to_tensor.py dauphin/text/transforms/utils.py dauphin/image/models/shake_shake_function.py dauphin/image/data.py dauphin/image/task.py load_json write_to_file write_to_json_file parse_sequence Augmentation parse_transform get_dataloaders main AugScheduler output_classification sce_loss create_task relu_model default_loader accimage_loader pil_loader CIFARDataset MNISTDataset MLP conv3x3 BasicBlock PyramidNet Bottleneck ShakeDropFunction ShakeDrop initialize_weights ShakeShake ResidualPath DownsamplingShortcut BasicBlock ShakeFunction get_alpha_beta conv_init conv3x3 wide_basic WideResNet SoftCrossEntropyLoss AutoContrast Blur Brightness CenterCrop Color Compose Contrast Cutout Equalize HorizontalFlip Identity Invert Mixup Normalize Posterize RandomCrop RandomResizedCrop Resize Rotate Sharpness ShearX ShearY Smooth Solarize ToTensor DauphinTransform TranslateX TranslateY categorize_value VerticalFlip parse_sequence Augmentation parse_transform get_dataloaders AugScheduler output_classification sce_loss create_task main ImdbDataset build_vocab clean_web_text get_data_stats BertModule BackTranslate Compose TfIdfWordRep ToTensor DauphinTransform UnifRep EfficientRandomGen 
str makedirs write close dirname open list isinstance write dumps close dirname item open keys makedirs split int parse_sequence str bool tuple random startswith append randint float split EmmentalDataLoader task list items info append config EmmentalModel score save write_to_file argv Augmentation len AugScheduler log_path range augment_policy create_task learn write_to_json_file EmmentalLearner init info parse_args_to_config load join get_dataloaders train add_task scatter_ size view new_zeros shake_shake_shake_backward pyramidnet_bottleneck task pyramidnet_depth pyramidnet_alpha mlp_hidden_dim wide_resnet_depth final_featuremap_dim shake_shake_depth wide_resnet_dropout shake_shake_shake_image shake_shake_base_channels shake_shake_shake_forward feature_size info wide_resnet_width prod reshape data isinstance fill_ Conv2d zero_ BatchNorm2d kaiming_normal_ Linear to rand view FloatTensor bias xavier_uniform_ weight __name__ constant_ BertModule model deepcopy defaultdict text len log range split text add_to_vocab split range len print replace find | SenWu/dauphin | 949 |
Seojiyoung/Depth-Map-Estimation | ['depth estimation', 'monocular depth estimation'] | ['Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps with Accurate Object Boundaries'] | train.py loaddata_demo.py demo.py loaddata.py nyu_transform.py senet.py demo_transform.py test.py util.py sobel.py net.py modules.py resnet.py densenet.py video.py | main test define_model _is_numpy_image CenterCrop ToTensor Scale Normalize _is_pil_image densenet161 DenseNet _DenseLayer _DenseBlock _Transition getTrainingData getTestingData depthDataset depthDataset readNyu2 MFF R E_densenet D E_senet _UpProjection E_resnet model Lighting Grayscale _is_numpy_image Saturation CenterCrop RandomRotate Contrast ToTensor Brightness Scale Normalize RandomHorizontalFlip _is_pil_image RandomOrder ColorJitter ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 se_resnext50_32x4d senet154 SENet SEResNetBottleneck SEBottleneck SEResNeXtBottleneck initialize_pretrained_model Bottleneck se_resnet152 se_resnet50 se_resnext101_32x4d SEModule se_resnet101 Sobel lg10 evaluateError averageErrors nValid addErrors getNanMask setNanToZero maxOfTwo nNanElement main define_model densenet161 senet154 model resnet50 E_densenet E_senet E_resnet load define_model test eval load_state_dict cuda readNyu2 model numpy cuda imsave enumerate load_url load_state_dict DenseNet DataLoader depthDataset DataLoader depthDataset DataLoader depthDataset load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url load_state_dict initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet lt clone clone getNanMask nValid lg10 sum pow div numpy setNanToZero float abs maxOfTwo VideoCapture model DataLoader unsqueeze release fromarray COLOR_BGR2RGB imshow normalize expand_dims NORM_MINMAX enumerate read uint8 isOpened numpy array cvtColor | If the Rock chip is updated, this example becomes usable.<br> (rknn conversion currently fails because of unsupported functions) - + Single Image Depth Map + Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps with Accurate Object Boundaries + Paper: Junjie Hu, Mete Ozay, Yan Zhang, Takayuki Okatani https://arxiv.org/abs/1803.08673 Results -   | 950
Septembit/Image-segmentation | ['semantic segmentation'] | ['The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation'] | segmentation/test.py segmentation/layers.py segmentation/cal_weight.py segmentation/dataloader.py segmentation/train.py segmentation/gen_list.py segmentation/tiramisu.py segmentation/utils.py datalist dataTestlist testLoader randomCrop dataloader gen_images gen_drivable gen_list TransitionDown center_crop Bottleneck DenseLayer DenseBlock TransitionUp main FCDenseNet57 FCDenseNet103 FCDenseNet FCDenseNet67 main error test save_weights load_weights adjust_learning_rate weights_init get_predictions train predict randint size crop join sorted format print makedirs len join gen_list gen_images gen_list size FCDenseNet67 DataLoader ArgumentParser resize save cuda open fromarray OrderedDict load_state_dict parse_args format readlines close fcd enumerate load items testLoader print reshape ANTIALIAS add_argument get_predictions zero_grad save_weights adjust_learning_rate FloatTensor Adam to range epoch lr item dataTrainloader time criterion backward cpu error parameters step len copyfile join save print format load load_state_dict data size max view size sum numpy criterion model Variable backward zero_grad get_predictions step cuda enumerate model print set_device readlines close eval save cuda enumerate open param_groups isinstance kaiming_uniform Conv2d zero_ weight model Variable eval get_predictions cuda append | # Image Segmentation This code is implemented based on https://arxiv.org/pdf/1611.09326.pdf and used to do image segmentation by improved Unet. ## Usage python train.py -b -lr ## Results To be continued.. | 951 |
SeungyounShin/EXTD | ['face detection'] | ['EXTD: Extremely Tiny Face Detector via Iterative Filter Reuse'] | layers_S3FD/functions/detection.py data/factory.py data/egohand.py prepare_hand_dataset.py layers_S3FD/functions/__init__.py s3fd.py layers/__init__.py tools/eval_head.py tools/eval_hand.py train.py layers_S3FD/__init__.py prepare_wider_data.py layers/functions/detection.py layers/bbox_utils.py layers/modules/l2norm.py data/config.py tools/pascal_test.py layers/functions/__init__.py utils/augmentations.py train_extd.py layers_S3FD/modules/__init__.py data/config_EXTD.py layers/modules/multibox_loss.py layers_S3FD/functions/prior_box.py data/vochead.py layers_S3FD/bbox_utils.py demo.py EXTD_cpu.py data/widerface.py layers_S3FD/modules/multibox_loss.py tools/wider_test.py tools/afw_test.py tools/anchor_matching_test.py tools/fddb_test.py layers/modules/__init__.py layers_S3FD/modules/l2norm.py EXTD.py layers/functions/prior_box.py tools/detect.py detect EXTD bbox_predictor inverted_residual_1 upsample class_predictor build_extd feature_extractor backbone inverted_residual_2 EXTD bbox_predictor inverted_residual_1 upsample class_predictor build_extd feature_extractor backbone inverted_residual_2 generate_file parse_wider_file wider_data_file build_s3fd multibox S3FD add_extras vgg val train adjust_learning_rate str2bool val train adjust_learning_rate str2bool HandDetection dataset_factory detection_collate VOCDetection VOCAnnotationTransform WIDERDetection detection_collate decode nms intersect match_ssd log_sum_exp jaccard center_size match point_form encode Detect PriorBox L2Norm decode nms intersect match_ssd log_sum_exp jaccard center_size match point_form encode Detect PriorBox L2Norm detect_face anchor_match_count save_pkl anchor_match_ssd_count all_np plot_anchor_match dyna_anchor test_net evaluate_detections get_voc_results_file_template voc_ap write_voc_results_file Timer voc_eval do_python_eval get_output_dir parse_rec test_net evaluate_detections get_voc_results_file_template voc_ap write_voc_results_file Timer voc_eval do_python_eval get_output_dir detect_face detect_face bbox_vote multi_scale_test flip_test get_data detect_face data imwrite to_chw_bgr unsqueeze resize save_dir cuda open basename IMREAD_COLOR shape imread range format size astype sqrt net join time Variable print convert rectangle Tensor numpy array format len write close unique array open int strip xrange split append enumerate format len write close TRAIN_FILE VAL_FILE parse_wider_file xrange open Conv2d enumerate enumerate vgg add_extras multibox data save_folder zero_grad adjust_learning_rate save dataset cuda repr range state_dict val format gamma net enumerate join time criterion backward Variable print EPOCHES step join time format state_dict criterion Variable print save_folder repr eval save dataset cuda net enumerate param_groups lr close add_scalar WIDERDetection VAL_FILE TRAIN_FILE VOCDetection DIR append FloatTensor clamp size min expand max intersect expand_as squeeze_ lt sort size jaccard clone gt index_fill_ eq point_form encode sum max range squeeze_ size jaccard index_fill_ point_form encode max range log cat max mul sort new clamp index_select resize_as_ long data int Variable size astype copy shape to_chw_bgr unsqueeze cuda Tensor range max net decode OVERLAP_THRESH all_np INPUT_SIZE view shape expand_as range LongTensor concatenate astype sqrt enumerate int print clone match Tensor len decode int LongTensor view print match_ssd concatenate clone astype sqrt shape all_np expand_as 
Tensor INPUT_SIZE range enumerate len size array unique anchor_match_count dump close anchor_match_ssd_count open show subplot plot xlabel set_yticks grid ylabel save_pkl figure legend zeros array range len arange concatenate size maximum sum max range cumsum argmax max range eps format astype mkdir float enumerate minimum join print sort maximum voc_ap argsort zeros array len join makedirs join makedirs print format get_voc_results_file_template enumerate join format print get_voc_results_file_template mean boxes fnames mkdir voc_eval enumerate data imwrite to_chw_bgr unsqueeze resize cuda view shape tic range format evaluate_detections size astype sqrt pull_image toc join Variable print float32 t rectangle numpy get_output_dir len write_voc_results_file do_python_eval int parse findall text append find parse_rec sum bool resize numpy column_stack detect_face flip zeros shape detect_face row_stack minimum maximum delete row_stack tile zeros sum max join WIDER_DIR loadmat format | # EXTD
**Extremely Tiny Face Detector via Iterative Filter Reuse**
https://arxiv.org/pdf/1906.06579.pdf
<p align="center">
<img width="880" height="400" src="https://raw.githubusercontent.com/SeungyounShin/EXTD/master/img/overall_framework.png">
</p>
## Implementation
| 952 |
SforAiDl/Neural-Voice-Cloning-With-Few-Samples | ['speech synthesis'] | ['Neural Voice Cloning with a Few Samples'] | dv3/hparams.py Modules/MultiHeadAttention.py dv3/deepvoice3_pytorch/frontend/en/__init__.py dv3/audio.py dv3/jsut.py dv3/deepvoice3_pytorch/frontend/text/numbers.py dv3/deepvoice3_pytorch/modules.py dv3/deepvoice3_pytorch/frontend/text/__init__.py dv3/train.py Modules/CloningSamplesAttention.py dv3/deepvoice3_pytorch/frontend/text/cmudict.py dv3/vctk.py dv3/tests/test_frontend.py Modules/TemporalProcessing.py dv3/preprocess.py dv3/vctk_preprocess/prepare_vctk_labels.py dv3/compute_timestamp_ratio.py dv3/deepvoice3_pytorch/frontend/__init__.py dv3/__init__.py dv3/vctk_preprocess/extract_feats.py dv3/lrschedule.py train_encoder.py dv3/deepvoice3_pytorch/frontend/text/cleaners.py dv3/tests/test_nyanko.py dv3/deepvoice3_pytorch/__init__.py dv3/ljspeech.py utils.py dv3/synthesis.py dv3/tests/test_deepvoice3.py train_whole.py train_dv3.py dv3/deepvoice3_pytorch/frontend/jp/__init__.py dv3/vctk_preprocess/prepare_htk_alignments_vctk.py Modules/Conv1dGLU.py dv3/deepvoice3_pytorch/version.py dv3/setup.py dv3/deepvoice3_pytorch/conv.py speaker_adaptation.py dv3/tests/test_conv.py speaker_adaptatation-libri.py dv3/deepvoice3_pytorch/nyanko.py dv3/deepvoice3_pytorch/frontend/text/symbols.py dv3/deepvoice3_pytorch/deepvoice3.py setup.py dv3/tests/test_embedding.py Modules/Attention.py Encoder.py Modules/SpectralProcessing.py dv3/deepvoice3_pytorch/builder.py Encoder develop build_py logit time_string spec_loss save_checkpoint save_alignment save_states sequence_mask MaskedL1Loss restore_parts _pad_2d guided_attention MelSpecDataSource _load_embedding guided_attentions plot_alignment eval_model _pad build_model masked_mean prepare_spec_image PartialyRandomizedSimilarTimeLengthSampler load_checkpoint TextDataSource LinearSpecDataSource PyTorchDataset collate_fn train _NPYDataSource logit time_string spec_loss save_checkpoint save_alignment save_states sequence_mask MaskedL1Loss restore_parts _pad_2d guided_attention MelSpecDataSource _load_embedding guided_attentions plot_alignment eval_model _pad build_model masked_mean prepare_spec_image PartialyRandomizedSimilarTimeLengthSampler load_checkpoint TextDataSource LinearSpecDataSource PyTorchDataset collate_fn train _NPYDataSource logit time_string spec_loss save_checkpoint save_alignment save_states sequence_mask MaskedL1Loss restore_parts _pad_2d guided_attention MelSpecDataSource _load_embedding guided_attentions plot_alignment eval_model _pad build_model masked_mean prepare_spec_image PartialyRandomizedSimilarTimeLengthSampler load_checkpoint TextDataSource LinearSpecDataSource PyTorchDataset collate_fn train _NPYDataSource train_encoder load_checkpoint build_encoder save_checkpoint download_file my_collate get_speaker_embeddings get_cloned_voices generate_cloned_samples tts Speech_Dataset visualize _build_mel_basis preemphasis load_wav save_wav _denormalize inv_preemphasis melspectrogram _lws_processor _linear_to_mel _db_to_amp _amp_to_db inv_spectrogram spectrogram _normalize hparams_debug_string _process_utterance build_from_path _process_utterance build_from_path cyclic_cosine_annealing noam_learning_rate_decay step_learning_rate_decay preprocess write_metadata develop build_py tts logit time_string spec_loss save_checkpoint save_alignment save_states sequence_mask MaskedL1Loss restore_parts _pad_2d guided_attention MelSpecDataSource _load_embedding guided_attentions plot_alignment eval_model _pad build_model masked_mean 
prepare_spec_image PartialyRandomizedSimilarTimeLengthSampler load_checkpoint TextDataSource LinearSpecDataSource PyTorchDataset collate_fn train _NPYDataSource _process_utterance end_at build_from_path start_at build_deepvoice_3 deepvoice3_multispeaker deepvoice3 nyanko Conv1d Decoder Encoder AttentionLayer Converter expand_speaker_embed get_mask_from_lengths ConvTranspose1d Embedding sinusoidal_encode position_encoding_init Conv1dGLU HighwayConv1d Conv1d SinusoidalEncoding GradMultiply Linear Converter Decoder _clear_modules Encoder AttentionSeq2Seq MultiSpeakerTTSModel mix_pronunciation _maybe_get_arpabet text_to_sequence normalize_delimitor _yomi text_to_sequence _mix_pronunciation sequence_to_text add_punctuation mix_pronunciation lowercase english_cleaners expand_abbreviations collapse_whitespace basic_cleaners add_punctuation convert_to_ascii transliteration_cleaners expand_numbers _parse_cmudict _get_pronunciation CMUDict normalize_numbers _expand_dollars _expand_ordinal _expand_decimal_point _expand_number _remove_commas text_to_sequence _clean_text _symbols_to_sequence _should_keep_symbol sequence_to_text _arpabet_to_sequence test_conv1d_incremental test_incremental_correctness _pad test_single_speaker_deepvoice3 _deepvoice3 _get_model test_multi_speaker_deepvoice3 _pad_2d test_incremental_forward _test_data test_sinusoidal test_en test_ja_jsut test_en_lj test_ja test_nyanko test_incremental_correctness _pad test_nyanko_basics _pad_2d _test_data load_binary_file_frame replace_conflines replace_write extract_final_features generate_merlin_wav subfolder_select pwrap array_to_binary_file extract_intermediate_features pe save_numpy_features load_binary_file execute get_reconstructions copytree json2hts on_progress write_hts_label do Attention CloningSamplesAttention Conv1dGLU MultiHeadAttention SpectralProcessing PreNet Temp_Masking TemporalProcessing pad subplots xlabel close ylabel colorbar tight_layout imshow savefig max Variable size expand cuda expand_as long is_cuda max LongTensor FloatTensor downsample_step expand unsqueeze array outputs_per_step len T plot_alignment min max flip join tts format save_wav save_alignment enumerate makedirs join T format _denormalize save_wav print min makedirs prepare_spec_image inv_spectrogram numpy save_alignment enumerate len expand_as L1Loss masked_l1 logit exp masked_mean l1 Variable MaskedL1Loss masked_loss_weight mean zero_ log zeros exp range T zeros max range len model zero_grad spec_loss unsqueeze priority_freq binary_divergence_weight save_checkpoint seq2seq cuda save_states binary_criterion view step from_numpy getattr guided_attentions linear_dim outputs_per_step format eval_model param_groups lrschedule size downsample_step mean clip_grad_norm BCELoss lr_schedule enumerate int lr_schedule_f backward Variable print contiguous postnet tqdm numpy get_trainable_parameters len join format print postnet save seq2seq print format load load_state_dict update format print load_state_dict state_dict uint8 T add_image add_audio viridis prepare_spec_image uint8 add_image add_audio viridis float add_scalar generate_cloned_samples print data Encoder state_dict int from_numpy stack unsqueeze cat zeros array amax enumerate criterion FloatTensor Variable backward print step zero_grad save_checkpoint encoder type range enumerate download visualize Audio display _tts subplot T xlabel ylabel colorbar tight_layout imshow figure specshow format _tts print shape splitlines append range int16 sample_rate write astype _amp_to_db ref_level_db T abs T 
_denormalize run_lws _lws_processor astype float32 _db_to_amp ref_level_db power _amp_to_db T _linear_to_mel abs _build_mel_basis values submit partial ProcessPoolExecutor zip collect_files append enumerate load int load_wav join T replace sample_rate astype float32 trim save exists minimum float build_from_path write_metadata makedirs print sample_rate hop_size sum max T _denormalize text_to_sequence Variable model make_generation_fast_ eval unsqueeze array long inv_spectrogram numpy cuda available_speakers labels TranscriptionDataSource range len range len start_at end_at build_model load_checkpoint sample_rate getattr hop_size add_hparam AttentionSeq2Seq Decoder Encoder Converter max MultiSpeakerTTSModel AttentionSeq2Seq Decoder Encoder Converter MultiSpeakerTTSModel AttentionSeq2Seq Decoder Encoder Converter max MultiSpeakerTTSModel size expand float array cos sin clone cos sin normal_ zero_ normal_ sqrt normal_ zero_ sqrt zero_ normal_ zero_ enumerate clear_buffer join join mix_pronunciation append split Tagger _yomi parse replace normalize_delimitor replace add_punctuation hira2kata normalize sub lowercase collapse_whitespace lowercase convert_to_ascii collapse_whitespace lowercase expand_abbreviations collapse_whitespace add_punctuation convert_to_ascii expand_numbers append _get_pronunciation sub split split group split int group sub append match group len cleaner getattr deepvoice3 LongTensor Variable size rand array max AttentionSeq2Seq Decoder Encoder Converter MultiSpeakerTTSModel model _test_data _get_model LongTensor model Variable print size rand _get_model array max abs max embed_speakers view from_numpy _pad_2d encoder LongTensor size start_fresh_sequence mean eval incremental_forward load decoder Variable reshape print array len _plot make_generation_fast_ abs max cuda view from_numpy _pad_2d load_state_dict dirname encoder LongTensor size start_fresh_sequence mean eval incremental_forward load join decoder Variable reshape print _get_model array len Variable Embedding print position_encoding_init mean SinusoidalEncoding numpy long getattr sequence_to_text text_to_sequence getattr sequence_to_text text_to_sequence format text_to_sequence print sequence_to_text TranscriptionDataSource getattr collect_files trange len format text_to_sequence print sequence_to_text TranscriptionDataSource getattr collect_files trange len model _test_data nyanko nyanko _plot seq2seq abs max view from_numpy _pad_2d encoder LongTensor size start_fresh_sequence mean eval incremental_forward load decoder Variable reshape print postnet nyanko numpy array len enumerate replace_conflines join remove lexists isdir copystat readlink st_mode symlink ignore lstat lchmod copy2 listdir S_IMODE makedirs Popen readline wait close pwrap iter append print execute fromfile reshape close open tofile array close open reshape size close fromfile open strip exists open str replace_write getcwd escape pe writelines append format chdir glob close mkdir listdir enumerate int read format_info_tup print write symlink embed str chdir getcwd print min replace_write pe symlink mkdir abspath sleep listdir copytree len tuple abspath copy2 savez_compressed seed str list sorted load_binary_file_frame getcwd append next range astype shuffle set lower mkdir zip enumerate remove print min symlink array len join load_binary_file_frame str format chdir getcwd array_to_binary_file pe mkdir abspath dirname keys range print reshape generate_merlin_wav load debug items print append str float print wait Popen | # 
Neural-Voice-Cloning-with-Few-Samples Implementation of the paper titled "Neural Voice Cloning with Few Samples" by Baidu [link](https://arxiv.org/pdf/1802.06006). The repository is partially complete. # To Do 1. Temporal Mask Layer In Encoder 2. Merging Encoder and Deep Voice 3 # Acknowledgements The implementation of Deep Voice 3 was adapted from the following repository: https://github.com/r9y9/deepvoice3_pytorch | 953 |
ShangxuanWu/CycleGAN-Face-off | ['style transfer'] | ['CycleGAN Face-off'] | make_video/qi.py options/train_options.py data/image_folder.py util/visualizer.py data/aligned_dataset.py data/custom_dataset_data_loader.py data/data_loader.py train.py generate_fake_sequence.py util/image_pool.py util/png.py util/get_data.py make_video/wu.py models/base_model.py models/ssim_backup/setup.py models/models.py util/html.py models/ssim.py data/base_data_loader.py options/base_options.py test.py make_video/jin.py models/ssim_backup/max_ssim.py data/base_dataset.py util/util.py datasets/combine_A_and_B.py models/ssim_backup/pytorch_ssim/__init__.py models/networks.py models/test_model.py data/unaligned_dataset.py models/cycle_gan_model.py data/single_dataset.py options/test_options.py draw_plot/draw_plot.py models/pix2pix_model.py main mkdir_if_not_exist AlignedDataset BaseDataset get_transform __scale_width BaseDataLoader CustomDatasetDataLoader CreateDataset CreateDataLoader is_image_file ImageFolder default_loader make_dataset SingleDataset UnalignedDataset read_log read_log_d2 main main main BaseModel CycleGANModel create_model get_norm_layer GANLoss ResnetGenerator weights_init_orthogonal ResnetBlock weights_init_normal weights_init_xavier define_D UnetGenerator define_G init_weights UnetSkipConnectionBlock get_scheduler print_network NLayerDiscriminator weights_init_kaiming Pix2PixModel create_window gaussian _ssim SSIM ssim TestModel create_window gaussian _ssim SSIM ssim BaseOptions TestOptions TrainOptions GetData HTML ImagePool encode print_numpy varname diagnose_network mkdirs mkdir info save_image tensor2im Visualizer rmtree makedirs join replace VideoWriter_fourcc concatenate print getcwd resize system write VideoWriter imread mkdir_if_not_exist range release len fineSize Lambda Scale RandomCrop BICUBIC RandomHorizontalFlip append int size initialize name UnalignedDataset print AlignedDataset SingleDataset CustomDatasetDataLoader name print initialize is_image_file join sorted append walk VideoCapture read LINE_AA isOpened putText destroyAllWindows FONT_HERSHEY_SIMPLEX append initialize CycleGANModel model print name TestModel Pix2PixModel data uniform constant __name__ data constant xavier_normal uniform __name__ data constant uniform __name__ kaiming_normal data constant print orthogonal uniform __name__ print apply BatchNorm2d partial InstanceNorm2d LambdaLR ReduceLROnPlateau StepLR get_norm_layer ResnetGenerator UnetGenerator init_weights cuda init_weights NLayerDiscriminator cuda get_norm_layer print parameters Tensor Variable contiguous unsqueeze pow conv2d create_window size type_as get_device cuda is_cuda transpose numpy tile print parameters fromarray save print join search print float64 flatten astype mkdir makedirs | ## CycleGAN Face-off This is a modified version of CycleGAN for the paper "CycleGAN Face-off" by Xiaohan Jin, Ye Qi and Shangxuan Wu. [Paper](https://arxiv.org/abs/1712.03451). Note that this is not the repo for the video in [here](https://www.youtube.com/watch?v=Fea4kZq0oFQ). This repo contains the code which adds: 1. SSIM loss. Usage: adding `--with_ssim` in your training script. 1. Better testing script for generating a whole video. Usage: `python generate_fake_sequence.py` 1. Training scripts for some of the experiments like Shangxuan <-> Russ etc. 1. For weighted loss of facial mask, please use [this repo](https://github.com/Sharon-Jin/pytorch-CycleGAN-and-pix2pix). 1. 
For the double_d setting, please use [this_repo](https://github.com/CharlotteKay/pytorch-CycleGAN-and-pix2pix). 1. A make_video folder for making a comparison video (such as make_video/jin/output_with_sould.mkv). Usage: `python jin.py`. The resulting video is [here](https://github.com/ShangxuanWu/pytorch-CycleGAN-and-pix2pix/blob/master/make_video/qi/demo.mkv). | 954 |
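Illustrative aside for the CycleGAN Face-off row above: its README adds an SSIM loss enabled with `--with_ssim`, and its symbol list includes `gaussian`, `create_window`, `_ssim` and `ssim` helpers. The sketch below is a minimal windowed SSIM in PyTorch, not the repository's implementation; the window size, the constants, and the way the term is weighted into the cycle loss are assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_window(size=11, sigma=1.5, channels=3):
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)                      # (1, size)
    win = (g.t() @ g).unsqueeze(0).unsqueeze(0)         # (1, 1, size, size)
    return win.expand(channels, 1, size, size).contiguous()

def ssim(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # x, y: image batches scaled to [0, 1], shape (N, C, H, W)
    c = x.size(1)
    w = gaussian_window(window_size, channels=c).to(x.device)
    pad = window_size // 2
    mu_x = F.conv2d(x, w, padding=pad, groups=c)
    mu_y = F.conv2d(y, w, padding=pad, groups=c)
    var_x = F.conv2d(x * x, w, padding=pad, groups=c) - mu_x ** 2
    var_y = F.conv2d(y * y, w, padding=pad, groups=c) - mu_y ** 2
    cov_xy = F.conv2d(x * y, w, padding=pad, groups=c) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim_map.mean()

# One plausible way such a term enters training (an assumption, not the
# repository's exact loss): add (1 - SSIM) next to the L1 cycle loss.
# loss_cycle = l1(rec_A, real_A) + lambda_ssim * (1.0 - ssim(rec_A, real_A))
```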
Shantanu48114860/Deep-Counterfactual-Networks-with-Propensity-Dropout | ['selection bias', 'causal inference'] | ['Deep Counterfactual Networks with Propensity-Dropout'] | DCN_network.py zz/main_propensity_dropout.py dataloader.py DCN.py Propensity_socre_network.py Utils.py zz/DCN.py main_propensity_dropout.py Propensity_net_NN.py zz/DCN_network.py DataLoader DCN DCN_network main_propensity_dropout_BL train_eval_DCN test_DCN Propensity_net_NN Propensity_socre_network Utils DCN DCN_network main_propensity_dropout_BL train_eval_DCN test_DCN convert_to_tensor convert_to_tensor_DCN Propensity_socre_network print DCN_network eval prepare_tensor_for_DCN train convert_to_tensor convert_to_tensor_DCN format Propensity_socre_network print DCN_network min to_csv eval prepare_tensor_for_DCN sum max len format print preprocess_data_from_csv to_csv OrderedDict DataLoader get_device test_DCN append train_eval_DCN sum range len | ## Introduction: This project is the implementation of the paper: <b>"Deep Counterfactual Networks with Propensity-Dropout"</b> [[arXiv]](https://arxiv.org/pdf/1706.05966.pdf) in pytorch. <br/> Ahmed M. Alaa, Michael Weisz, Mihaela van der Schaar ## Dataset: IHDP dataset will be found in the folder: ["./Dataset"](https://github.com/Shantanu48114860/Deep-Counterfactual-Networks-with-Propensity-Dropout/tree/master/Dataset) ## Abstract "We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject's potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The impact of selection bias in the observational data is alleviated via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score. The network is trained in alternating phases, where in each phase we use the training examples of one of the two potential outcomes (treated and control populations) to update the weights of the shared layers and the respective outcome-specific layers. Experiments conducted on data based on a real-world observational study show that our algorithm outperforms the state-of-the-art." <br/> <pre> <i> • Ahmed M. Alaa • Michael Weisz • Mihaela van der Schaar</i></pre> ## Architecture <img src="https://github.com/Shantanu48114860/Deep-Counterfactual-Networks-with-Propensity-Dropout/blob/master/Screen%20Shot%202020-08-13%20at%202.14.36%20AM.png" > | 955 |
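Illustrative aside for the Deep Counterfactual Networks row above: the abstract describes a propensity-dropout scheme in which the network is thinned for every training example via a dropout probability derived from its propensity score. The PyTorch sketch below only illustrates that idea; the entropy-based mapping from propensity to dropout rate, the layer sizes, and the per-example Bernoulli masking are assumptions, not the repository's `DCN` / `Propensity_net_NN` code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def propensity_dropout_rate(p, eps=1e-8):
    # Binary entropy of the propensity score p in (0, 1), roughly in [0, 1].
    h = -(p * torch.log2(p + eps) + (1 - p) * torch.log2(1 - p + eps))
    # Illustrative mapping (assumption, not necessarily the paper's formula):
    # thin the network more where the propensity score is extreme.
    return torch.clamp(0.5 * (1.0 - h), 0.0, 0.9)

class SharedRepresentation(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)

    def forward(self, x, drop_p):
        # drop_p holds one dropout rate per example, so the per-example
        # "thinning" is emulated with an inverted-dropout Bernoulli mask.
        h = F.relu(self.fc1(x))
        if self.training:
            keep = 1.0 - drop_p.view(-1, 1)
            mask = torch.bernoulli(keep.expand_as(h)) / keep
            h = h * mask
        return F.relu(self.fc2(h))

x = torch.randn(8, 25)          # 8 examples, 25 covariates (illustrative sizes)
propensity = torch.rand(8)      # stand-in for a propensity network's output
drop_p = propensity_dropout_rate(propensity)
rep = SharedRepresentation(25)
rep.train()
print(rep(x, drop_p).shape)     # torch.Size([8, 64])
```

The direction of the mapping (more dropout where treatment/control overlap looks poor) is only one reading of the abstract; the repository's own propensity network and DCN code define the actual scheme.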
ShaogangRen/GAN_Implicit_Likelihood_Estimation | ['anomaly detection'] | ['Estimate the Implicit Likelihoods of GANs with Application to Anomaly Detection'] | model_arrhythmia/options.py model_simulation/evaluate.py model_arrhythmia/evaluate.py model_simulation/options.py model_arrhythmia/utils.py model_arrhythmia/main_arrhythmia.py model_arrhythmia/visualizer.py model_arrhythmia/data.py model_simulation/data.py model_simulation/utils.py model_simulation/visualizer.py model_simulation/sim_2_6.py model_simulation/main_sim.py get_cifar_anomaly_dataset get_cifar_rwllk_dataset get_mnist_rwllk_dataset get_sub_mnist_dataset get_mnist2_anomaly_dataset load_data get_loader get_sub_cifar10_dataset get_mnist_anomaly_dataset roc auprc evaluate generator center_lambda reset_grad inv_generator Var_Net discriminator main InvGAN Options save_images loss_plot load_mnist KL_z_z_hat_v2 IGM_Loss_Geom G_Jacobian_usediag load_celebA G_LLK_Geom_G gaussian IG_Loss_ZZ G_Jacobian IGM_Loss_Geom_alt G_Jacobian_Det G_LLK_Geom imsave get_data_label_batch_iter initialize_weights generate_animation center_lambda Js_Jacobian G_Jacobian_eig merge KL_z_z_hat get_data_batch_iter G_Jacobian_Eigen_Penalty print_network Visualizer get_cifar_anomaly_dataset get_cifar_rwllk_dataset get_mnist_rwllk_dataset get_sub_mnist_dataset get_mnist2_anomaly_dataset load_data get_loader get_sub_cifar10_dataset get_mnist_anomaly_dataset roc auprc evaluate generator center_lambda reset_grad inv_generator Var_Net discriminator main InvGAN Options save_images loss_plot load_mnist KL_z_z_hat_v2 IGM_Loss_Geom G_Jacobian_usediag load_celebA G_LLK_Geom_G gaussian IG_Loss_ZZ G_Jacobian IGM_Loss_Geom_alt G_Jacobian_Det G_LLK_Geom imsave get_data_label_batch_iter initialize_weights generate_animation center_lambda Js_Jacobian G_Jacobian_eig merge KL_z_z_hat get_data_batch_iter G_Jacobian_Eigen_Penalty print_network Visualizer MNIST int get_cifar_anomaly_dataset format CIFAR10 batch_size Compose SVHN get_cifar_rwllk_dataset get_mnist_rwllk_dataset get_sub_mnist_dataset get_mnist2_anomaly_dataset get_loader isize get_sub_cifar10_dataset dataset anomaly_class get_mnist_anomaly_dataset join basename print ImageFolder DataLoader seed int arange concatenate shuffle copy array len array copy concatenate array extend copy concatenate seed int arange clone shuffle from_numpy cat len seed int arange clone shuffle from_numpy cat len sort clone from_numpy append cat from_numpy clone cat numpy join plot xlabel roc_curve close ylabel dict ylim title savefig figure brentq legend xlim auc average_precision_score data zero_ Variable cluster_centers_ fit power labels_ zeros range len parse print name exit train InvGAN view concatenate Variable from_numpy append next range view concatenate Variable from_numpy append next array range det Variable log t netG unsqueeze append mm cuda range cat max norm randn Variable min sqrt netG Tensor sum cuda max norm randn Variable netIG min netG sqrt unsqueeze Tensor sum cuda range cat mul Variable netIG t netG unsqueeze XVar mm cuda range cat mul Variable netIG t netG unsqueeze XVar mm cuda range cat squeeze pdf G_Jacobian_eig numpy sum array log view Variable t unsqueeze XVar mm cuda range cat unsqueeze save XVar cuda log view append range cat isinf det Variable print t netG isnan zeros mm diag pdf eigh numpy unsqueeze cuda log squeeze append sum range cat format isinf Variable print t netG mm array sum Variable diag zeros log t netG eigh unsqueeze numpy isinf append XVar mm cuda range cat det view Variable diag zeros log t netG 
isnan unsqueeze save isinf append XVar mm cuda range cat tensor normal_ seed join int asarray concatenate FloatTensor reshape transpose extract_data astype shuffle zeros type enumerate ImageFolder DataLoader parameters squeeze merge zeros enumerate append imread mimsave range join plot xlabel grid close ylabel tight_layout savefig legend range len isinstance Conv2d normal_ modules zero_ ConvTranspose2d Linear int std batch_size kl_divergence IG mean Normal append randint cuda range cat enumerate batch_size channels input_size div tensor cuda log view IG add iter append range mean float int get_data_batch_iter pow std | ShaogangRen/GAN_Implicit_Likelihood_Estimation | 956 |
ShaogangRen/Thunder ['sparse learning'] ['Thunder: a Fast Coordinate Selection Solver for Sparse Learning'] data/generate_sparse_sim_data.py | # Thunder Thunder code accompanying the paper [Thunder: a Fast Coordinate Selection Solver for Sparse Learning, NeurIPS 2020]. The code has been tested on Ubuntu 16.04.6 LTS. ## Files and Usage ### binary_dump_libsvm binary_dump_libsvm is a program that converts a LIBSVM dataset into a binary file for fast loading by lassolver. Usage: ``` ./binary_dump_libsvm <n> <p> <input_libsvm_data> <output_binary_data> ``` | 957 |
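Illustrative aside for the Thunder row above: `binary_dump_libsvm` is described as converting a LIBSVM dataset of n samples and p features into a binary file. The Python sketch below shows the same kind of conversion conceptually; the actual tool appears to be a standalone executable whose binary layout is not documented here, so the row-major float64 layout and the separate `.labels` file are assumptions.

```python
import numpy as np
from sklearn.datasets import load_svmlight_file

def dump_libsvm_to_binary(libsvm_path, binary_path, n, p):
    # Read an n-sample, p-feature LIBSVM file and write the dense feature
    # matrix as a flat row-major float64 block (layout assumed for illustration).
    X, y = load_svmlight_file(libsvm_path, n_features=p)
    dense = np.asarray(X.todense(), dtype=np.float64)[:n]
    dense.tofile(binary_path)
    y[:n].astype(np.float64).tofile(binary_path + ".labels")
```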
ShaojieJiang/FACE | ['response generation'] | ['Improving Neural Response Diversity with Frequency-Aware Cross-Entropy Loss'] | face/modules.py face/face.py face/__init__.py FaceAgent OutputLayer _transpose_hidden_state RNNEncoder HLoss RNNDecoder UnknownDropout AttentionLayer Identity Seq2seq opt_to_kwargs is_tensor isinstance | # Intro Frequency-Aware Cross-Entropy (FACE) is a simple yet effective algorithm that helps to improve the response diversity of Seq2Seq-based chatbots. The main idea is to assign token frequency-based weights to cross-entropy loss function, so as to suppress meaningless high-frequency tokens, which we believe to have caused generic responses like _"I don't know"_. Read our [paper](https://arxiv.org/abs/1902.09191) for more details. This repo contains the official implementation of the models proposed in paper **Improving Neural Response Diversity with Frequency-Aware Cross-Entropy Loss**, together with the data we used for experiments. ## Requirements To use this programme, you need [PyTorch](https://pytorch.org/) 1.0.0 (tested with Python 3.6+) and the latest version of [ParlAI](https://github.com/facebookresearch/ParlAI) framework (tested on commit `c6745203`). | 958 |
ShapeAI/OneShotLearning-with-Siamese- | ['few shot learning'] | ['Generalizing from a Few Examples: A Survey on Few-Shot Learning'] | inception_blocks_v2.py face_recognizer.py VideoStreamingFlask/camera.py VideoStreamingFlask/main.py fr_utils.py camera.py main.py VideoCamera concatenate LRN2D variable square load_weights_from_FaceNet img_path_to_encoding shape conv2d_bn load_weights img_to_encoding load_dataset zeros inception_block_2b faceRecoModel inception_block_3a inception_block_3b inception_block_1a inception_block_1c inception_block_1b inception_block_2a gen video_feed index VideoCamera gen video_feed index Variable asarray initializer run get_shape len load_weights set_weights genfromtxt reshape transpose filter listdir reshape File array imread transpose around resize array predict concatenate concatenate conv2d_bn concatenate conv2d_bn concatenate conv2d_bn concatenate conv2d_bn concatenate conv2d_bn concatenate inception_block_2b inception_block_3a inception_block_3b Model inception_block_1a inception_block_1c inception_block_1b Input inception_block_2a get_frame | # OneShotLearning-with-Siamese Aim of this project is to idenitfy similar face charachterstics such as marvel DC_heros,Harry potter,etc | 959 |
ShapeAI/OneShotLearning-with-Siamese-character_classification | ['few shot learning'] | ['Generalizing from a Few Examples: A Survey on Few-Shot Learning'] | inception_blocks_v2.py face_recognizer.py VideoStreamingFlask/camera.py VideoStreamingFlask/main.py fr_utils.py camera.py main.py VideoCamera concatenate LRN2D variable square load_weights_from_FaceNet img_path_to_encoding shape conv2d_bn load_weights img_to_encoding load_dataset zeros inception_block_2b faceRecoModel inception_block_3a inception_block_3b inception_block_1a inception_block_1c inception_block_1b inception_block_2a gen video_feed index VideoCamera gen video_feed index Variable asarray initializer run get_shape len load_weights set_weights genfromtxt reshape transpose filter listdir reshape File array imread transpose around resize array predict concatenate concatenate conv2d_bn concatenate conv2d_bn concatenate conv2d_bn concatenate conv2d_bn concatenate conv2d_bn concatenate inception_block_2b inception_block_3a inception_block_3b Model inception_block_1a inception_block_1c inception_block_1b Input inception_block_2a get_frame | # OneShotLearning-with-Siamese Aim of this project is to idenitfy similar face charachterstics such as marvel DC_heros,Harry potter,etc | 960 |
SharifAmit/Fundus2Angio | ['fundus to angiography generation'] | ['Fundus2Angio: A Conditional GAN Architecture for Generating Fluorescein Angiography Images from Retinal Fundus Photography'] | train.py convert_npz.py src/performance_visualize.py src/real_fake_data_loader.py src/model.py random_crop.py convert_npz random_crop train fine_generator fundus2angio_gan novel_residual_block coarse_generator discriminator ReflectionPadding2D plot_history visualize_save_weight visualize_save_weight_global load_real_data generate_real_data generate_fake_data_fine resize generate_fake_data_coarse str img_to_array load_img append range fromarray int str choice save range makedirs int train_on_batch visualize_save_weight_global print visualize_save_weight plot_history generate_real_data generate_fake_data_fine resize append range generate_fake_data_coarse len int novel_residual_block pow Model summary Input range compile novel_residual_block Model summary Input range compile min Model summary Input range compile d_model1 d_model2 g_model_fine d_model4 Adam Model g_model_coarse summary Input d_model3 compile subplot print axis close imshow generate_real_data savefig generate_fake_data_fine save resize range generate_fake_data_coarse makedirs subplot print axis close imshow generate_real_data savefig save resize range generate_fake_data_coarse makedirs plot print close savefig legend makedirs load ones randint zeros predict zeros predict array | # ISVC2020 Fundus2Angio [](https://paperswithcode.com/sota/fundus-to-angiography-generation-on-fundus?p=fundus2angio-a-novel-conditional-gan) This code is part of the supplementary materials for the ISVC 2020 for our paper Fundus2Angio: A Conditional GAN Architecture for Generating Fluorescein Angiography Images from Retinal Fundus Photography . The paper has since been accpeted to ISVC 2020 and will be preseneted in October 2020.  ### Arxiv Pre-print ``` https://arxiv.org/abs/2005.05267 ``` # Citation ``` | 961 |
SharifAmit/OCT_Classification | ['retinal oct disease classification'] | ['Optic-Net: A Novel Convolutional Neural Network for Diagnosis of Retinal Diseases from Optical Tomography Images'] | train.py data_preprocess_sri2014.py src/dataloader.py test.py inference.py src/model.py src/metrics.py src/visualize.py src/utils.py sri2014_process inference image_preprocessing print_pred test train Kermany2018 Srinivasan2014 print_metric Weighted_Error OpticNet res_identity res_conv RDBI EncoderDecoder callback_for_training plot_loss_acc join copyfile mkdir listdir len str print around ravel range len reshape imread resize clear_session load_model image_preprocessing predict print_pred clear_session time print_metric load_model print flow_from_directory ImageDataGenerator next predict clear_session Srinivasan2014 time OpticNet collect load_model print fit_generator Kermany2018 callback_for_training save plot_loss_acc ImageDataGenerator flow_from_directory ImageDataGenerator flow_from_directory str print confusion_matrix argmax array range str list items ConfusionMatrix print Weighted_Error TPR TNR Overall_ACC res_identity range str compile Adam res_conv Model RDBI summary Input EncoderDecoder TensorBoard ReduceLROnPlateau ModelCheckpoint show list plot title savefig legend range len | # ICMLA2019 OCT_Classification [](https://paperswithcode.com/sota/retinal-oct-disease-classification-on-oct2017?p=optic-net-a-novel-convolutional-neural) [](https://paperswithcode.com/sota/retinal-oct-disease-classification-on?p=optic-net-a-novel-convolutional-neural) A model for classifying different Retinal Diseases using Deep Learning from Optical Coherence Tomography Images. This code is part of the **supplementary materials for the IEEE ICMLA 2019** for our paper *Optic-net: A Novel Convolutional Neural Network for Diagnosis of Retinal Diseases from Optical Tomography Images*. The paper has since been accpeted to IEEE ICMLA 2019 and will be preseneted in December 2019.  ### IEEE Xplore Digital Library ``` https://ieeexplore.ieee.org/document/8999264 ``` ### Arxiv Pre-print | 962 |
SharifAmit/OpticNet-71 | ['retinal oct disease classification'] | ['Optic-Net: A Novel Convolutional Neural Network for Diagnosis of Retinal Diseases from Optical Tomography Images'] | train.py data_preprocess_sri2014.py src/dataloader.py test.py inference.py src/model.py src/metrics.py src/visualize.py src/utils.py sri2014_process inference image_preprocessing print_pred test train Kermany2018 Srinivasan2014 print_metric Weighted_Error OpticNet res_identity res_conv RDBI EncoderDecoder callback_for_training plot_loss_acc join copyfile mkdir listdir len str print around ravel range len reshape imread resize clear_session load_model image_preprocessing predict print_pred clear_session time print_metric load_model print flow_from_directory ImageDataGenerator next predict clear_session Srinivasan2014 time OpticNet collect load_model print fit_generator Kermany2018 callback_for_training save plot_loss_acc ImageDataGenerator flow_from_directory ImageDataGenerator flow_from_directory str print confusion_matrix argmax array range str list items ConfusionMatrix print Weighted_Error TPR TNR Overall_ACC res_identity range str compile Adam res_conv Model RDBI summary Input EncoderDecoder TensorBoard ReduceLROnPlateau ModelCheckpoint show list plot title savefig legend range len | # ICMLA2019 OCT_Classification [](https://paperswithcode.com/sota/retinal-oct-disease-classification-on-oct2017?p=optic-net-a-novel-convolutional-neural) [](https://paperswithcode.com/sota/retinal-oct-disease-classification-on?p=optic-net-a-novel-convolutional-neural) A model for classifying different Retinal Diseases using Deep Learning from Optical Coherence Tomography Images. This code is part of the **supplementary materials for the IEEE ICMLA 2019** for our paper *Optic-net: A Novel Convolutional Neural Network for Diagnosis of Retinal Diseases from Optical Tomography Images*. The paper has since been accpeted to IEEE ICMLA 2019 and will be preseneted in December 2019.  ### IEEE Xplore Digital Library ``` https://ieeexplore.ieee.org/document/8999264 ``` ### Arxiv Pre-print | 963 |
SharifAmit/Robust_Joint_Attention | ['retinal oct disease classification'] | ['Improving Robustness using Joint Attention Network For Detecting Retinal Degeneration From Optical Coherence Tomography Images'] | train.py data_preprocess_sri2014.py src/dataloader.py src/visualize.py src/model.py src/utils.py sri2014_process train multiple_outputs Srinivasan_2014 OpticNet res_identity resnet50 mobilenetv2 res_conv RDBI EncoderDecoder callback_for_training plot_loss_acc join copyfile mkdir listdir len clear_session time OpticNet collect Srinivasan_2014 print resnet50 fit_generator mobilenetv2 callback_for_training save plot_loss_acc next flow_from_directory ImageDataGenerator multiple_outputs res_identity range str compile Adam res_conv Model RDBI summary Input EncoderDecoder Adam output Model ResNet50 summary compile Adam output Model summary compile MobileNetV2 TensorBoard ReduceLROnPlateau ModelCheckpoint show list plot title savefig legend range len | # ICIP2020 Joint Robust Attention Network [](https://paperswithcode.com/sota/retinal-oct-disease-classification-on?p=improving-robustness-using-joint-attention) [](https://paperswithcode.com/sota/retinal-oct-disease-classification-on-oct2017?p=improving-robustness-using-joint-attention) This code is part of the supplementary materials for the IEEE ICIP 2020 for our paper Improving robustness using Joint Attention network for Optical Coherence Tomography Images . The paper has since been accpeted to IEEE ICIP 2020 and will be preseneted in October 2020.  ### Arxiv Pre-print ``` https://arxiv.org/abs/2005.08094 ``` ### IEEE Xplore Digital Library | 964 |
Sharpenb/python_paris | ['graph clustering'] | ['Hierarchical Graph Clustering using Node Pair Sampling'] | python_paris/tests/test_homogeneous_cut_slicer.py python_paris/homogeneous_cut_slicer.py python_paris/tests/test_distance_slicer.py python_paris/__init__.py python_paris/tests/test_cluster_slicer.py python_paris/tests/test_paris.py examples/simple_graph.py python_paris/tests/test_heterogeneous_cut_slicer.py python_paris/cluster_cut_slicer.py setup.py python_paris/distance_slicer.py python_paris/heterogeneous_cut_slicer.py python_paris/paris.py readme ClusterTree clustering_from_cluster_cut ranking_cluster_cuts best_cluster_cut ClusterTree ranking_distances best_distance clustering_from_distance ClusterTree clustering_from_heterogeneous_cut ranking_heterogeneous_cuts best_heterogeneous_cut ClusterTree clustering_from_homogeneous_cut ranking_homogeneous_cuts best_homogeneous_cut paris reorder_dendrogram TestClusterSlicer TestDistanceSlicer TestHeterogeneoousCutSlicer TestHomogeneoousCutSlicer TestParis pop int range pop int cluster_label scoring score size distance ClusterTree range pop int sorted list cluster_label scoring score size distance ClusterTree append range values pop int range int scoring score size mean distance ClusterTree old_score range int sorted list scoring concatenate score size argsort distance ClusterTree old_score array range append int pop range pop int scoring size best_cut set distance ClusterTree best_score range set best_heterogeneous_cut append union range pop int range int scoring score size distance ClusterTree old_score range int sorted list scoring score size distance ClusterTree old_score range values pop add_edge list convert_node_labels_to_integers neighbors min nodes copy remove_node has_edge edges append float add_node len update zeros lexsort range | python-paris: Hierarchical graph clustering algorithm (paris) and dendrogram processing ========================= The paris package is a Python module that provides an implementation of the hierarchical clustering algorithm for graphs, paris, from the paper: [Hierarchical Graph Clustering using Node Pair Sampling](https://www.mlgworkshop.org/2018/papers/MLG2018_paper_4.pdf)<br> Thomas Bonald , Bertrand Charpentier, Alexis Galland, Alexandre Hollocou<br> Mining and Learning with Graphs (MLG - KDD Workshop), 2018 Additonally, it provides four algorithms able to process dendrograms in order to extract best clusters, clusterings or distances. These algorithms are described in the paper: [Multi-scale Clustering in Graphs using Modularity](http://www.diva-portal.org/smash/get/diva2:1292782/FULLTEXT01.pdf)<br> Bertrand Charpentier<br> KTH Publicaiton Library (DiVA) 2019 | 965 |
ShawnWilliams/style_transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | utils.py stye_transfer.py vgg_model.py _single_style_loss _create_summary _create_content_loss _gram_matrix _create_losses _create_style_loss main train generate_noise_image make_dir get_resized_image download save_image _weights _avgpool _conv2d_relu load_vgg reshape _gram_matrix len generate_noise_image _create_summary minimize Variable train make_dir get_resized_image _create_losses load_vgg download print urlretrieve exists stat join asarray ANTIALIAS float32 save split open fit astype float32 astype imsave mkdir _conv2d_relu _avgpool loadmat | # style_transfer 该项目复现了Leon A. Gatys的论文《A Neural Algorithm of Artistic Style》中的工作。 论文链接: https://arxiv.org/pdf/1508.06576v2.pdf 运行程序: 在style_transfer.py文件中指定STYLE为输入的风格图片,指定CONTENT为输入的内容图片,CONTENT图片的IMAGE_HEIGHT与IMAGE_WIDTH, 以及输出图像的路径OUT_DIR与训练的中间文件保存路径CHECKPOINT_DIR,然后run style_transfer.py的main函数,即可在OUT_DIR文件夹下得到与CONTENT图片大小一致的输出图片。 | 966 |
Shen-Lab/BAL | ['active learning'] | ['Bayesian active learning for optimization and uncertainty quantification in protein docking'] | src/Configuration.py src/pre_bal.py src/scoring_rmsd.py dependencies/eigen_library/debug/gdb/__init__.py dependencies/eigen_library/scripts/relicense.py dependencies/eigen_library/debug/gdb/printers.py src/cNMA_output/Rec/Output/Configuration.py EigenQuaternionPrinter lookup_function register_eigen_printers build_eigen_dictionary EigenMatrixPrinter update Configurations encoder decoder Configurations append strip_typedefs search tag target type encoder_individual chr exists ord decoder_individual | # BAL Bayesian Active Learning for Optimization and Uncertainty Quantification with Applications in Protein Docking https://pubs.acs.org/doi/abs/10.1021/acs.jctc.0c00476 # Versions in branch: * master: The standard BAL * noclashv1: The version whose output structure will gurantee \delta vdw<0. ## Dependencies: * C++ 4.8.5 or higher * cNMA: Download and install the cNMA from "https://github.com/Shen-Lab/cNMA". * Energy model: Please download the random forest energy model from | 967 |
ShengdingHu/GraphPolicyNetworkActiveLearning | ['active learning'] | ['Graph Policy Network for Transferable Active Learning on Graphs'] | src/baselines/sampling_methods/informative_diverse.py src/baselines/sampling_methods/kcenter_greedy.py src/utils/dataloader.py src/baselines/sampling_methods/utils/tree_test.py src/baselines/coreset.py src/baselines/active-learning/sampling_methods/uniform_sampling.py src/baselines/active-learning/sampling_methods/constants.py src/utils/rewardshaper.py src/baselines/active-learning/sampling_methods/represent_cluster_centers.py src/utils/env.py src/baselines/active-learning/sampling_methods/utils/tree_test.py src/baselines/active-learning/utils/chart_data.py src/train.py src/baselines/sampling_methods/represent_cluster_centers.py src/utils/player.py src/baselines/active-learning/sampling_methods/sampling_def.py src/baselines/active-learning/sampling_methods/simulate_batch.py src/baselines/active-learning/utils/kernel_block_solver.py src/baselines/sampling_methods/sampling_def.py src/baselines/active-learning/sampling_methods/kcenter_greedy.py src/baselines/anrmab.py src/baselines/sampling_methods/hierarchical_clustering_AL.py src/utils/utils.py src/baselines/age.py src/test.py src/baselines/sampling_methods/utils/tree.py src/baselines/sampling_methods/margin_AL.py src/baselines/active-learning/utils/__init__.py src/baselines/active-learning/sampling_methods/utils/__init__.py src/datasetcollecting/biggraph.py src/baselines/sampling_methods/wrapper_sampler_def.py src/baselines/sampling_methods/mixture_of_samplers.py src/baselines/sampling_methods/constants.py src/baselines/sampling_methods/utils/__init__.py src/utils/query.py src/baselines/active-learning/sampling_methods/graph_density.py src/utils/const.py src/baselines/active-learning/sampling_methods/margin_AL.py src/baselines/active-learning/run_experiment.py src/baselines/sampling_methods/graph_density.py src/baselines/sampling_methods/simulate_batch.py src/baselines/active-learning/sampling_methods/wrapper_sampler_def.py src/datasetcollecting/loadreddit.py src/baselines/active-learning/sampling_methods/bandit_discrete.py src/utils/classificationnet.py src/baselines/active-learning/utils/create_data.py src/baselines/active-learning/__init__.py src/baselines/active-learning/utils/small_cnn.py src/utils/common.py src/baselines/coreset/full_solver_gurobi.py src/datasetcollecting/mergeedgelist.py src/baselines/active-learning/sampling_methods/__init__.py src/baselines/active-learning/sampling_methods/utils/tree.py src/baselines/sampling_methods/__init__.py src/baselines/active-learning/sampling_methods/hierarchical_clustering_AL.py src/baselines/active-learning/utils/utils.py src/baselines/active-learning/sampling_methods/mixture_of_samplers.py src/baselines/coreset/gurobi_solution_parser.py src/utils/policynet.py src/baselines/active-learning/sampling_methods/informative_diverse.py src/baselines/sampling_methods/bandit_discrete.py src/baselines/sampling_methods/uniform_sampling.py src/datasetcollecting/collecting.py src/baselines/coreset/compute_distance_mat.py src/baselines/active-learning/utils/allconv.py policyQuery centralityQuery multipleRun entropyQuery coresetQuery randomQuery anrmabQuery parse_args ageQuery parse_args SingleTrain AGEQuery EntropyQuery EdgeQuery multiclassentropy_numpy centralissimo CentralityQuery ClusterSampler ProbSampler AnrmabQuery unitTest BanditDiscreteSampler centralissimo DegSampler unitTest CoreSetQuery main generate_one_curve BanditDiscreteSampler 
get_mixture_of_samplers get_all_possible_arms get_AL_sampler get_base_AL_mapping get_wrapper_AL_mapping GraphDensitySampler HierarchicalClusterAL InformativeClusterDiverseSampler kCenterGreedy MarginAL MixtureOfSamplers RepresentativeClusterMeanSampling SamplingMethod SimulateBatchSampler UniformSampling WrapperSamplingMethod Node Tree TreeTest AllConv get_normalize get_scoring_method get_sampling_method plot_results combine_results get_between main get_standardize get_cifar10 get_csv_data get_mldata get_wikipedia_talk_data get_keras_data main Dataset BlockKernelSolver SmallCNN filter_data calculate_entropy get_mldata get_train_val_test_splits flip_label create_checker_unbalanced Logger get_model flatten_X get_class_counts get_mixture_of_samplers get_all_possible_arms get_AL_sampler get_base_AL_mapping get_wrapper_AL_mapping GraphDensitySampler HierarchicalClusterAL InformativeClusterDiverseSampler kCenterGreedy MarginAL MixtureOfSamplers RepresentativeClusterMeanSampling SamplingMethod SimulateBatchSampler UniformSampling WrapperSamplingMethod Node Tree TreeTest selectclass loadPPI count_conference process readAminer readdata parse_args split merge niceprint logargs logdicts ConfigRootLogger GraphLoader prob2Logprob Env localdiversity logprob2Prob degprocess perc normalizeEntropy parse_args Player selectActions RandomQuery unitTestProbQuery choose ProbQuery RewardShaper inspect_weight preprocess_features normalize_adj entropy sparse_mx_to_torch_sparse_tensor column_normalize AverageMeter accuracy mean_std preprocess_adj inspect_grad common_rate add_argument ArgumentParser validation getPool G RandomQuery remain_epoch query trainOnce range q current_budget AGEQuery validation format getPool G debug q remain_epoch query trainOnce numpy range age_basef current_budget validation format getPool G EntropyQuery debug q remain_epoch query trainOnce numpy range age_basef current_budget validation format getPool G debug q current_budget remain_epoch query trainOnce numpy range age_basef CentralityQuery getPool batchsize where query trainOnce CoreSetQuery tolist remain_epoch append range q current_budget format debug validation testmask G reshape trainmask numpy getPool batchsize nval query trainOnce argmax ntest remain_epoch AnrmabQuery range q current_budget format debug astype float validation G numpy getPool query trainOnce cuda Env remain_epoch load_state_dict range q normadj current_budget format debug modelname policy ProbQuery load validation G getState zeros numpy list print reset zip append range str logargs gpu list values astype append zeros range pagerank len mean randn print CoreSetQuery range q get_train_val_test_splits score max seed str len sampler ceil normalize append range unique int print min extend select_batch transform fit batch_size MkDir trials Logger dataset save_dir flush_file seed str sampling_method get_mldata GFile data_dir strftime train_horizon range dump generate_one_curve Glob warmstart_size gmtime select_method score_method join get_AL_sampler get_model to_dict partial print ABCMeta ABCMeta load pop list isinstance FastGFile tuple vstack append sorted plot keys dict mean sqrt legend zip fill_between std range len rfind len find use print len PdfPages combine_results plot_results standardize title savefig source_dir normalize close split replace GFile Dataset strip append array split read_table Dataset apply TfidfTransformer download_file array CountVectorizer fit_transform concatenate reshape transpose flatten load_data Dataset load read BytesIO seek concatenate 
Dataset write close download_file getnames array StringIO open data fetch_rcv1 load_iris MkDir target save_dir GFile fetch_mldata transpose fetch_20newsgroups_vectorized fit_transform dump get_wikipedia_talk_data get_keras_data TfidfTransformer get_cifar10 join get_csv_data load_breast_cancer datasets int T concatenate ones uniform vstack zeros range shape reshape load flatten create_checker_unbalanced append sort unique shuffle copy delete unique append array range GridSearchCV AllConv SmallCNN SVC LinearSVC LogisticRegression BlockKernelSolver int entropy len ceil range append get_class_counts seed int arange min shuffle copy flip_label len update list format print size tolist set add intersection sum range len data selectclass y format append print size tolist PPI numpy range cat len join readline format int sorted items print strip close OrderedDict split append open print add len set items print count_conference append len add_edges_from print Graph subgraph tolist Reddit max format print len extend mkdir mergeversion split mkdir append enumerate len stdout setFormatter format getLogger WARNING addHandler StreamHandler Formatter DEBUG setLevel INFO FileHandler format niceprint info append vars enumerate join append max range len items str format niceprint info append enumerate sigmoid softmax float log log argsort float size FloatTensor transpose indices float log values zeros_like randn print where softmax ProbQuery q reshape batchsize softmax Categorical info sample log_prob detach flatten dot sum array diag mean diags flatten coo_matrix sum array normalize_adj eye data Size astype float32 from_numpy shape int64 numpy type_as tolist sqrt float sum len mean sum list format info zip print format reshape sum | # Graph Policy Network for Transferable Active Learning on Graphs
This is the code for the paper **G**raph **P**olicy network for transferable **A**ctive learning on graphs (GPA).
## Dependencies
matplotlib==2.2.3
networkx==2.4
scikit-learn==0.21.2
numpy==1.16.3
scipy==1.2.1
torch==1.3.1
| 968 |
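Illustrative aside for the GraphPolicyNetworkActiveLearning row above: its README pins exact dependency versions. The short helper below is not part of the repository; it simply checks an installed environment against those pins.

```python
from importlib.metadata import PackageNotFoundError, version

PINS = {
    "matplotlib": "2.2.3",
    "networkx": "2.4",
    "scikit-learn": "0.21.2",
    "numpy": "1.16.3",
    "scipy": "1.2.1",
    "torch": "1.3.1",
}

for pkg, wanted in PINS.items():
    try:
        found = version(pkg)
    except PackageNotFoundError:
        found = None
    status = "OK" if found == wanted else f"expected {wanted}, found {found}"
    print(f"{pkg}: {status}")
```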
Shiaoming/vid2depth | ['depth and camera motion', 'depth estimation'] | ['Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints'] | nets.py train.py dataset/dataset_loader.py ops/icp_grad_test.py ops/icp_util.py ops/__init__.py util.py ops/icp_op.py project.py ops/icp_grad.py inference.py dataset/gen_data.py ops/icp_train_demo.py reader.py dataset/__init__.py ops/icp_test.py model.py main _gray2rgb _normalize_depth_for_display _run_inference Model _resize_like disp_net egomotion_net _spatial_transformer _pixel2cam _bilinear_sampler _cam2pixel get_cloud _meshgrid_abs inverse_warp _euler2mat _egomotion_vec2mat DataReader main train format_number is_a_numpy_array get_seq_middle count_parameters get_vars_to_restore read_text_lines info natural_keys KittiOdom atoi KittiRaw Bike get_resource_path Cityscapes get_seq_start_end _stack_image_seq _generate_data _gen_example main _gen_example_star _icp_grad IcpOpGradTest IcpOpTest IcpOpTestBase fill_feed_dict run_training loss_func training placeholder_inputs DataProducer main inference transform_cloud_xyz np_get_transformation_matrix np_transform_cloud_xyz get_transformation_matrix batch_transform_cloud_xyz join basename replace get_vars_to_restore Model Saver dirname output_dir MakeDirs Supervisor model_ckpt cmap astype float32 delete get_cmap int _gray2rgb clip _run_inference value value _spatial_transformer constant _pixel2cam ones reshape concat _cam2pixel transpose matmul shape _meshgrid_abs tile expand_dims _egomotion_vec2mat matmul slice concat matmul ones_like ones reshape transpose concat float32 matmul cast linspace expand_dims ones concat cos pi clip_by_value sin zeros expand_dims constant slice squeeze concat tile expand_dims _euler2mat _bilinear_sampler cast float32 seed checkpoint_dir pretrained_ckpt summary_freq set_random_seed Model MakeDirs train train_steps Supervisor ConfigProto Saver get_vars_to_restore int is_a_numpy_array isinstance format_number get_shape name get_vars_to_restore num_elements info trainable_variables sorted basename name extend warn list_variables setlocale LC_ALL int seed join KittiOdom list array_split num_train KittiRaw Bike data_dir cpu_count close Cityscapes dict Manager MakeDirs dataset_dir Pool range pop join uint8 _stack_image_seq data_dir astype get_example_with_index MakeDirs imsave hstack enumerate _generate_data float32 placeholder next_batch batch_size norm icp batch_size Variable concat maximum stack fill zeros scalar GradientDescentOptimizer Variable scalar minimize run_training constant concat cos pi matmul stack clip_by_value sin concatenate cos pi dot stack sin array clip ones concat transpose matmul get_transformation_matrix np_get_transformation_matrix ones concatenate transpose dot transform_cloud_xyz unstack zip append len | # vid2depth **Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints** Reza Mahjourian, Martin Wicke, Anelia Angelova CVPR 2018 Project website: [https://sites.google.com/view/vid2depth](https://sites.google.com/view/vid2depth) ArXiv: [https://arxiv.org/pdf/1802.05522.pdf](https://arxiv.org/pdf/1802.05522.pdf) <p align="center"> <a href="https://sites.google.com/view/vid2depth"><img src='https://storage.googleapis.com/vid2depth/media/sample_video_small.gif'></a> </p> <p align="center"> | 969 |
Shilpil/style-transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | train.py parse_arg.py vgg_net.py load_process_img.py loss.py load_image preprocess_image deprocess_image gram_matrix tv_loss content_loss style_loss build_parser train extract_features imread resize to_float reshape transpose shape tensordot Variable reduce_sum gram_matrix enumerate squared_difference slice shape reduce_sum squared_difference add_argument ArgumentParser beta1 style_weights max_iter h5_file append load_image initial_lr extract_features beta2 content_weight img_size content preprocess_image tv_weight print_iterations print style Variable output is_gpu_available gram_matrix epsilon convert_to_tensor File | # style-transfer Simple Neural Style transfer using pretrained VGG19 weights This is a simple implementation of style transfer based on the paper : A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576) It is based on neural networks and uses VGG 19 The implementation is as follows: 1. Load and preprocess the content and style image 2. Initialise the final image with the content image 3. Extract the features of all the 3 images (Content, Style and new generated image) 4. Calculate the loss which is a sum of Content loss + Style loss + Total variation loss. 5. Use Adam optimizer to reduce the loss and modify the new generated image. | 970 |
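Illustrative aside for the Shilpil/style-transfer row above: the README lists the five steps of the method (preprocess, initialise with the content image, extract features, combine content, style and total-variation losses, optimise with Adam), and the row's symbol list contains `gram_matrix`, `content_loss`, `style_loss` and `tv_loss`. The NumPy sketch below spells out those loss terms only; the layer choices, normalisation and weights are assumptions, and the repository itself appears to be TensorFlow-based.

```python
import numpy as np

def gram_matrix(features):
    # features: (H, W, C) activation map from one layer of the network.
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w * c)

def content_loss(gen_feats, content_feats):
    return np.mean((gen_feats - content_feats) ** 2)

def style_loss(gen_feats, style_feats):
    return np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)

def tv_loss(img):
    # Total variation: penalise differences between neighbouring pixels.
    dh = img[1:, :, :] - img[:-1, :, :]
    dw = img[:, 1:, :] - img[:, :-1, :]
    return np.sum(dh ** 2) + np.sum(dw ** 2)

def total_loss(gen_img, gen_feats, content_feats, style_feats,
               content_w=1.0, style_w=1e3, tv_w=1e-2):
    # Step 4 of the README: weighted sum of the three terms (weights assumed).
    return (content_w * content_loss(gen_feats, content_feats)
            + style_w * style_loss(gen_feats, style_feats)
            + tv_w * tv_loss(gen_img))
```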
ShipengWang/HyperAdam-Tensorflow | ['stochastic optimization'] | ['HyperAdam: A Learnable Task-Adaptive Adam for Network Training'] | adamLSTM/test_list.py adamLSTM/optimizee/stochasticlinear.py adamLSTM/train.py adamLSTM/optimizee/vgg.py adamLSTM/optimizee/lstm.py adamLSTM/nn_opt/__init__.py adamLSTM/nn_opt/deepmind.py adamLSTM/optimizee/mnist.py adamLSTM/task_list.py adamLSTM/main.py adamLSTM/nn_opt/rnn.py adamLSTM/nn_opt/MetaLSTMCell.py adamLSTM/nn_opt/metalstm.py adamLSTM/optimizee/trivial.py adamLSTM/optimizee/__init__.py adamLSTM/test.py adamLSTM/optimizee/tricks.py main optimizer_train_optimizee main train_optimizer LSTMOptModel AdamLstmOptModel AdamLstmAdaWeightOptModelNeither AdamLstmAdaWeightOptModel AdamLstmOptModelH AdamLstmAdaWeightOptModelLSTM MetaLSTMOptModel AdamLstmAdaWeightOptModelPre AdamLSTMCell _lrelu_eps batch_normalization _batch_normalization MyMultiRNNCell MyLSTMCell RNNpropModel lstm_func SinLSTMModel main forward MnistLinearModel main StochLinear ExponentiallyPointwiseRandomScaling Mixed Square Optimizee join restore n_tests prepare_train_optimizee sorted dump name print close test gd open eid append range log makedirs str Graph tasks GPUOptions gpu prepare_train_optimizee arange train_one_iteration save log open run str restore name getcwd ylabel build savefig legend append range dump plot close test mean float flush load join n_tests get_default_session print xlabel pause n_batches write draw save_best eid figure FLAGS global_variables_initializer makedirs relu sigmoid reshape tanh matmul update MnistLinearModel print feed_dict_test placeholder build_test test_optimizee get_initial_x next_internal_feed_dict Session run arange StochLinear clip_by_value Session run show getcwd build apply_gradients savefig append range update plot compute_gradients next_internal_feed_dict zeros join Variable next_feed_dict print AmsGradOptimizer figure global_variables_initializer loss | HyperAdam: A Learnable Task-Adaptive Adam for Network Training = Introduction - The current project page provides tensorflow code that implements our AAAI2019 paper: **Title**: "HyperAdam: A Learnable Task-Adaptive Adam for Network Training" **Authors**: Shipeng Wang, Jian Sun*, Zongben Xu **Email:** [email protected] **Institution**: School of Mathematics and Statistics, Xi'an Jiaotong University **Code**: https://github.com/ShipengWang/HyperAdam-Tensorflow, https://github.com/ShipengWang/Variational-HyperAdam (PyTorch) | 971 |
ShivaKrishnaM/ZS-SBIR ['sketch based image retrieval', 'image retrieval'] ['A Zero-Shot Framework for Sketch-based Image Retrieval'] prestep_image.py prestep_ext.py trainCVAE_pre.py prestep_sketch.py get_session get_session get_session get GPUOptions | # Zero-shot-SBIR GitHub repository for the ECCV-2018 paper titled "A Zero-Shot Framework for Sketch Based Image Retrieval". Please download the data from this Google Drive link: https://drive.google.com/file/d/16T0eXujQiwXK0Y7tmu3UzW63IFbuea-R/view?ts=5bb5be13 Extract the files into the same folder as the code. Run "trainCVAE_pre.py" to train and test the model. If you use our paper or code, please cite the following: ``` @InProceedings{Yelamarthi_2018_ECCV, author = {Kiran Yelamarthi, Sasi and Krishna Reddy, Shiva and Mishra, Ashish and Mittal, Anurag}, title = {A Zero-Shot Framework for Sketch based Image Retrieval}, | 972 |
ShuaiChenBIGR/MASSL-segmentation-framework | ['medical image segmentation', 'semantic segmentation'] | ['Multi-Task Attention-Based Semi-Supervised Learning for Medical Image Segmentation'] | network/ssl_3d_attention.py module/dice_loss.py Sequence_BraTS18_epoch_attention.py module/visualize.py network/ssl_3d_sep.py module/evaluation_lesion.py module/eval_attention_BraTS_slidingwindow.py BraTS18_preprocess.py Network_training_SSL_epoch_attention_BraTS18.py module/eval_BraTS_slidingwindow.py module/transform.py dataloader/BraTS18_dataloader.py Network_training_SSL_epoch_BraTS18.py Sequence_BraTS18_epoch.py module/evaluation_voxel.py module/visualize_attention.py module/common_module.py train_model network_training_ssl_epoch train_model network_training_ssl_epoch BraTS18data unique_rand history_log getmillisecond mkdir DiceCoefficientLF DiceCoefficientLF_rec dice_coeff MSELF DiceCoeff getDSC getAVD getLesionDetection do getImages getHausdorff getDSC getAVD getLesionDetection do getImages getHausdorff eval_net_dice test_net_dice eval_net_mse eval_net_dice test_net_dice eval_net_mse Elastic Crop ToTensor Resample Resize RandomCrop RandomCropT Flip visualize_loss visualize visualize_Seg visualize_loss visualize_Rec MASSL_norm MASSL_norm_feature_bank MASSL_norm_feature_bank_one_neuron weights_init MSSL_norm_double_encoder_capacity_unet MSSL_norm_bias weights_init MSSL_norm_large_AE semiSupervised3D_sep MSSL_norm_double_decoder_capacity_unet MSSL_norm eval_net_dice history_log zero_grad set_description save defaultdict apply savefig load_state_dict weights_init append to state_dict format visualize_loss close eval zip trange float eval_net_mse enumerate deepcopy time int print figure train step array split DiceCoefficientLF str train_model format load print history_log test_net_dice MSELoss mkdir load_state_dict to type save state_dict seed sorted list glob BraTSDataset shuffle map device zip randint rstrip strip exists makedirs append randint close write open zero_ zip forward is_cuda enumerate getDSC getAVD getLesionDetection Mask CopyInformation ReadImage BinaryThreshold flatten astype apply_along_axis StatisticsImageFilter BinaryErode GetArrayFromImage Subtract TransformIndexToPhysicalPoint getDistancesFromAtoB Execute float len ConnectedComponentImageFilter Multiply GetArrayFromImage unique SetFullyConnected Cast sitkUInt32 Execute GetSum float StatisticsImageFilter Execute sum visualize_Seg close eval savefig mkdir float numpy cuda enumerate close visualize_Rec eval savefig mkdir float numpy cuda net enumerate visualize visualize subplot format arange cmap suptitle ListedColormap set_title axis tight_layout viridis imshow figure linspace N subplot format set_title suptitle plot set_xlabel grid tight_layout ylim set_ylabel twinx legend subplot format arange cmap suptitle ListedColormap set_title axis tight_layout viridis imshow figure linspace N subplot format arange cmap suptitle ListedColormap set_title axis tight_layout viridis imshow figure linspace N data Conv3d isinstance xavier_uniform | # MASSL-segmentation-framework Multi-task Attention-based Semi-supervised Learning framework for image segmentation based on the paper published at MICCAI 2019 (https://arxiv.org/abs/1907.12303) by Shuai Chen, et al. <img src="MASSL_MRI.png" width="800"/> For questions please contact me through github or email directly. ## Requirements 1. python 3 2. matplotlib 3. numpy 4. SimpleITK 5. sklearn | 973 |
ShuaiyiHuang/DCCNet | ['semantic correspondence'] | ['Dynamic Context Correspondence Network for Semantic Alignment'] | train_dccnet.py lib/transformation.py eval_pf_willow.py geotnf/point_tnf.py models/spatial_context_encoder.py lib/pf_willow_dataset.py lib/eval_util_dynamic.py models/sce_efficient.py eval_pf_pascal.py geotnf/flow.py eval_tss.py lib/conv4d.py lib/im_pair_dataset.py models/model_dynamic.py lib/dataloader.py lib/normalization.py datasets/download_datasets.py models/dynamic_fusion_att.py geotnf/transformation.py models/loss_dynamic.py lib/pf_dataset.py lib/modules.py lib/point_tnf_dynamic.py lib/tss_dataset.py lib/torch_util.py lib/plot.py lib/py_util.py process_epoch download_TSS download_PF_willow download_and_uncompress th_sampling_grid_to_np_flow write_flo_file read_flo_file warp_image np_flow_to_th_sampling_grid unnormalize_axis PointTnf PointsToUnitCoords normalize_axis PointsToPixelCoords SynthPairTnf ComposedGeometricTnf SynthTwoStageTnf GeometricTnf AffineGridGen TpsGridGen SynthTwoStageTwoPairTnf SynthTwoPairTnf AffineGridGenV2 Conv4d conv4d DataLoaderIter default_collate DataLoader _worker_loop ExceptionWrapper pin_memory_batch _pin_memory_loop pck_metric flow_metrics pck pfpascal_test_dataloader pfdataset_pck pfpascal_val_dataloader ImagePairDataset FeatureExtraction featureL2Norm MutualMatching NeighConsensus maxpool4d FeatureCorrelation normalize_image NormalizeImageDict PFPascalDataset PFPascalDataset PFDataset plot_image plot_loss plot_image_debug save_plot unnormalize_axis PointsToUnitCoords mergeA_mergeB_from_out corr_to_matches bilinearInterpPointTnf nearestNeighPointTnf normalize_axis PointsToPixelCoords create_file_path str_to_bool save_checkpoint BatchTensorToVars collate_custom Softmax1D expand_dim AffineTnf AffineGridGen TSSDataset DynamicFusionNet score_for_single_corr4d weak_loss_singlebatch weak_loss DCCNet global_spatial_representation_efficient featureL2Norm SpatialContextEncoderEfficient SpatialContextEncoder featureL2Norm generate_spatial_descriptor time format backward print step zero_grad capitalize loss_fn batch_preprocessing_fn enumerate len endswith print extractall write close dirname open ZipFile exists makedirs print join basename download_and_uncompress print join basename download_and_uncompress print close float32 int32 resize fromfile open tofile astype float32 close array open uint8 grid_sample Variable astype unsqueeze np_flow_to_th_sampling_grid concatenate Variable unsqueeze meshgrid cuda range normalize_axis unnormalize_axis concatenate squeeze meshgrid numpy range clone normalize_axis expand_as unnormalize_axis clone expand_as Variable size contiguous conv3d half get_device HalfTensor zeros range cuda is_cuda cat seed get set_num_threads put collate_fn get pin_memory_batch isinstance put is_tensor sum isinstance Sequence new zip _new_shared Mapping Mapping is_tensor isinstance Sequence ne float size mean pow expand_as zeros le sum range data PointsToUnitCoords size pck bilinearInterpPointTnf numpy range PointsToPixelCoords DataLoader Dataset flatnonzero str format model pck_metric print size mean eval BatchTensorToVars batch_tnf corr_to_matches is_available zeros enumerate len DataLoader Dataset int th_sampling_grid_to_np_flow join view pointsToGrid Variable create_file_path size write_flo_file unsqueeze linspace cuda bilinearInterpPointTnf meshgrid numpy flow_output_dir range cat expand_as size max view tuple div fmod unsqueeze append max range cat isinstance Variable size add expand div unsqueeze cuda is_cuda show uint8 view 
Variable astype add imshow cuda is_cuda show uint8 view concatenate Variable print astype add shape stack imshow cuda is_cuda set_major_locator NullLocator set_axis_off margins subplots_adjust savefig join arange plot title savefig figure legend len view score_for_single_corr4d shape stack append sum range mergeA_mergeB_from_out view size softmax linspace expand_as meshgrid max range view min pow sqrt unsqueeze cat int view isinstance Variable toidx size multrows abs sqrt unsqueeze long topoint sum cuda is_cuda dirname makedirs is_tensor Mapping isinstance exp unsqueeze join str basename copyfile dirname save makedirs size list size sum weak_loss_singlebatch cat view model contiguous score_for_single_corr4d mean shape stack append sum max range view size mean permute normalize max int _quadruple arange Variable shape pad get_device zeros sum cuda is_cuda int bmm _quadruple view Variable contiguous transpose shape pad unsqueeze get_device zeros range cuda is_cuda | # DCCNet-Pytorch  This is the implementation of the paper: S. Huang, Q. Wang, S. Zhang, S. Yan, and X. He. Dynamic Context Correspondence Network for Semantic Alignment. ICCV 2019 [[arXiv](https://arxiv.org/abs/1909.03444)] using PyTorch. ## Getting started ### Environment Python 3.5.2 Pytorch 0.3.1 torchvision 0.2.1 | 974 |
ShuangLI59/person_search | ['person search', 'person re identification', 'pedestrian detection'] | ['Joint Detection and Identification Feature Learning for Person Search'] | tools/eval_test.py lib/rpn/generate_anchors.py lib/datasets/imdb.py lib/fast_rcnn/test_gallery.py lib/setup.py lib/fast_rcnn/test_probe.py lib/transform/torch_image_transform_layer.py lib/datasets/__init__.py lib/fast_rcnn/bbox_transform.py lib/datasets/factory.py tools/demo.py lib/fast_rcnn/test_utils.py lib/rpn/__init__.py lib/fast_rcnn/train.py tools/eval_utils.py lib/utils/__init__.py lib/rpn/generate.py lib/nms/py_cpu_nms.py lib/roi_data_layer/roidb.py lib/rpn/proposal_layer.py lib/roi_data_layer/__init__.py lib/rpn/anchor_target_layer.py lib/utils/timer.py tools/train_net.py lib/fast_rcnn/__init__.py lib/fast_rcnn/nms_wrapper.py lib/datasets/ds_utils.py lib/datasets/psdb.py tools/_init_paths.py lib/rpn/proposal_target_layer.py lib/utils/blob.py lib/roi_data_layer/layer.py lib/roi_data_layer/minibatch.py lib/fast_rcnn/config.py find_in_path customize_compiler_for_nvcc custom_build_ext locate_cuda unique_boxes xywh_to_xyxy validate_boxes xyxy_to_xywh filter_small_boxes get_imdb list_imdbs imdb clip_boxes bbox_transform bbox_transform_inv cfg_from_file cfg_from_list get_output_dir _merge_a_into_b nms get_gt_boxes_blob get_rois_blob _project_im_rois get_image_blob py_cpu_nms TorchImageTransformLayer im_list_to_blob prep_im_for_blob Timer pickle unpickle main mpi_collect mpi_dispatch add_path pathsep pjoin exists split find_in_path iteritems pjoin pathsep dirname sep append _compile compiler_so dot array unique transpose log dtype exp astype shape zeros minimum maximum join EXP_DIR abspath ROOT_DIR makedirs iteritems ndarray isinstance type array _merge_a_into_b literal_eval zip split MAX_SIZE min astype float32 SCALES shape resize append im_list_to_blob float max _project_im_rois hstack hstack zeros float abs astype append maximum minimum transpose xrange zeros max len min astype float32 shape resize float max probe_def mpi_collect _data_path usegt_and_exfeat use_gt name GPU_ID cfg_from_file get_imdb set_device map len _load detect_and_exfeat pprint exfeat eval_only sleep set_cfgs TEST image_index cfg_file cfg_from_list format evaluate_detections mpi_init _root_dir Net imdb_name evaluate_search join print gallery_def set_mode_gpu caffemodel mpi_dispatch probes pickle get_output_dir split iteritems list isinstance from_iterable gather insert | # Person Search Project This repository hosts the code for our paper [Joint Detection and Identification Feature Learning for Person Search](https://arxiv.org/abs/1604.01850). The code is modified from the py-faster-rcnn written by Ross Girshick. **Request the dataset from lishuang[at]mit.edu or tong.xiao.work[at]gmail.com (academic only).** </br> Due to licensing issues, please send us your request using your university email. ## Installation 1. Clone this repo **recursively** ```Shell git clone --recursive https://github.com/ShuangLI59/person_search.git ``` 2. Build Caffe with python layers and interface | 975 |
Sid2697/Word-recognition-and-retrieval | ['optical character recognition'] | ['Fused Text Recogniser and Deep Embeddings Improve Word Recognition and Retrieval'] | merge_embeds.py word_recognition.py word_retrieval.py l2Normalize get_features get_unique_words_index_list get_annotations get_occurance_list print div unsqueeze expand_as list list master_dict tqdm get_annotations append keys zeros enumerate len | Fused Text Recogniser and Deep Embeddings Improve Word Recognition and Retrieval ================================================================================= [](http://arxiv.org/abs/2007.00166) [](LICENSE) ### [Project page](https://sid2697.github.io/Word-recognition-and-retrieval/) | [Paper](https://arxiv.org/pdf/2007.00166.pdf) | [Demonstration](https://sid2697.github.io/files/Word_Retrieval_demo.gif) | [Poster](https://sid2697.github.io/files/Siddhant_Bansal_V4.pdf) | [Springer](https://link.springer.com/chapter/10.1007/978-3-030-57058-3_22) This repository contains code for the paper "**Fused Text Recogniser and Deep Embeddings Improve Word Recognition and Retrieval**" *[Siddhant Bansal](https://sid2697.github.io), [Praveen Krishnan](https://kris314.github.io), [C.V. Jawahar](https://faculty.iiit.ac.in/~jawahar/index.html)* published in DAS 2020. ## Word Recognition Results <!-- ----------- --> | 976 |
SiddhanthHegde/Custom-Neural-Style-Transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | utils.py train.py mobNet load_image style_loss_calc content_loss_calc unsqueeze open t mm | # Custom-Neural-Style-Transfer A customized take on neural style transfer that extracts features with MobileNet V2 rather than VGG19 (as used in the original paper https://arxiv.org/pdf/1508.06576.pdf). The conventional Adam optimizer is used instead of L-BFGS, along with other small changes. Implemented in PyTorch. <img width = "2406" src = "https://github.com/SiddhanthHegde/Custom-Neural-Style-Transfer/blob/main/NST%20preview%20img.jpg"> | 977
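For context on the content and style losses this kind of transfer optimizes (the repository exposes them as `content_loss_calc` and `style_loss_calc`, whose exact signatures are not shown here), a minimal PyTorch sketch of the standard Gram-matrix formulation looks roughly like this; it is an illustration, not the repository's code:

```python
import torch

def gram_matrix(features):
    # features: (channels, height * width) activations from one network layer
    return features @ features.t()

def content_loss(gen_feat, content_feat):
    # gen_feat, content_feat: (channels, H, W) activations of generated/content image
    # keep the generated image close to the content image in feature space
    return torch.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    # match second-order statistics (Gram matrices) of the style image
    g = gram_matrix(gen_feat.flatten(1))
    a = gram_matrix(style_feat.flatten(1))
    return torch.mean((g - a) ** 2)
```

The generated image itself is the tensor being optimized: the total objective is a weighted sum `alpha * content_loss + beta * style_loss`, and Adam updates the image pixels directly.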
SigCGANs/Conditional-Sig-Wasserstein-GANs | ['time series'] | ['Conditional Sig-Wasserstein GANs for Time Series Generation'] | lib/algos/base.py train.py lib/arfnn.py lib/algos/__init__.py lib/utils.py hyperparameters.py lib/algos/gans.py lib/algos/sigcwgan.py lib/__init__.py lib/data.py evaluate.py lib/plot.py lib/test_metrics.py lib/algos/gmmn.py lib/augmentations.py compute_predictive_score compute_test_metrics evaluate_benchmarks evaluate_generator get_algo_config complete_experiment_summary get_top_dirs set_seed get_dataset_configuration main get_algo_config get_algo run SimpleGenerator ArFNN ResidualBlock ResFNN AddLags BaseAugmentation apply_augmentations get_time_vector _apply_augmentation augment_path_and_compute_signatures lead_lag_transform_with_time LeadLag get_standard_augmentation SignatureConfig Scale Concat Cumsum lead_lag_transform cat_lags load_pickle get_equities_dataset get_arch_dataset download_mit_ecg_dataset get_var_dataset download_man_ahl_dataset get_data rolling_window StandardScalerTS Pipeline get_mit_arrythmia_dataset compare_hists plot_summary create_summary plot_signature compare_acf compare_cross_corr set_style savefig SkewnessLoss acf_torch ACFLoss LevEffLoss MeanLoss kurtosis_torch lev_eff_torch StdLoss KurtosisLoss CrossCorrelLoss histogram_torch HistoLoss Loss skew_torch cacf_torch load_pickle sample_indices pickle_it to_numpy is_multivariate get_standard_test_metrics BaseAlgo BaseConfig RCGAN CGANTrainer GAN toggle_grad TimeGAN compute_grad2 RCWGAN CWGAN _partial_mmd pairwise_distance median_pairwise_distance GMMN mmd_loss _rbf SigCWGAN sample_sig_fake calibrate_sigw1_metric SigCWGANConfig sigcwgan_loss score reshape copy to_numpy LinearRegression fit dict is_multivariate item load_pickle is_multivariate list plot_summary savefig load_state_dict to update compute_test_metrics plot close item manual_seed get_algo_config join compute_predictive_score BaseConfig calibrate_sigw1_metric dict compare_cross_corr to_numpy int float join listdir print to_csv evaluate_generator append DataFrame complete_experiment_summary get_top_dirs seed manual_seed get_algo_config pickle_it get_data device seed set_seed p savefig dirname to q state_dict format create_summary plot plot_losses training_loss get_algo join int G print makedirs fit chain product set_seed use_cuda print download_man_ahl_dataset BaseConfig initial_seed get_dataset_configuration datasets mkdir num_seeds algos range run repeat_interleave cat to repeat_interleave cat device append list range _apply_augmentation apply_augmentations augmentations set_index concatenate reshape index read_csv intersection transform float Pipeline log values transform multi_AR append float range get_pipeline transform float get_raw_data Pipeline list transform concatenate min append float Pipeline log get_equities_dataset get_arch_dataset get_var_dataset rolling_window get_mit_arrythmia_dataset get remove extractall close ZipFile get remove extractall close ZipFile set_visible subplots set_yscale grid flatten hist set_style set_ylabel legend set_major_locator subplots MaxNLocator plot set_xlabel grid mean set_ylabel set_style legend fill_between numpy std range compare_hists subplots min compare_acf text_box range T subplots set_title matshow reshape min subplots_adjust corrcoef add_axes colorbar max T grid plot join close var list mean append range list get_lower_triangular_indices mean append std range cat list mean pow append std range mean pow std mean pow var float arange cuda is_multivariate append requires_grad_ 
parameters pow size sum view size pow unsqueeze sum sqrt reshape einsum pairwise_distance detach _partial_mmd compute_sig_past compute_sig_future device to LinearRegression fit sample requires_grad_ mean compute_sig_future | # The authors' official PyTorch SigCWGAN implementation. This repository is the official implementation of [Conditional Sig-Wasserstein GANs for Time Series Generation] Authors: Paper Link: ## Requirements To set up the conda environment: ```setup conda env create -f requirements.yml ``` ## Datasets | 978
Silin159/PARG | ['response generation', 'data augmentation'] | ['Paraphrase Augmented Task-Oriented Dialog Generation'] | CamRest676/filter_eval.py MultiWOZ/eval.py MultiWOZ/build_para.py MultiWOZ/preprocess.py CamRest676/data_analysis.py MultiWOZ/ontology.py MultiWOZ/db_ops.py MultiWOZ/utils.py CamRest676/reader.py MultiWOZ/data_analysis.py MultiWOZ/state_act_mapping.py MultiWOZ/clean_dataset.py CamRest676/analysis_multi.py CamRest676/test.py MultiWOZ/filter_eval.py MultiWOZ/para_analysis.py MultiWOZ/model.py CamRest676/model.py CamRest676/tsd_net.py MultiWOZ/config.py CamRest676/config.py CamRest676/test_write.py CamRest676/analysis.py MultiWOZ/reader.py CamRest676/handle.py MultiWOZ/damd_net.py CamRest676/metric.py _Config realization realization_multiwoz build_delex_group_multiwoz find_para_multiwoz build_delex_group slots_match_multiwoz delexicalisation slots_match find_para edit_distance formalize ldp filter_punct similar metric_handler CamRestEvaluator BLEUScorer setsim MultiWOZEvaluator setsub GenericEvaluator report main Model get_glove_matrix MultiWOZReader pad_sequences clean_replace _ReaderBase CamRest676Reader get_sparse_input_aug SimpleDynamicEncoder toss_ Attn Paraphrase init_gru get_sparse_selective_input BSpanDecoder nan ActDecoder ParaDecoder cuda_ ResponseDecoder TSD clean_time clean_slot_values clean_text _Config toss_ Attn init_gru update_input biGRUencoder ActSpanDecoder BeamSearchNode ResponseDecoder Attn_Para DomainSpanDecoder LayerNormalization Copy ParaDecoder cuda_ get_one_hot_input label_smoothing get_sparse_input_aug Paraphrase MultiLayerGRUwithLN nan SimpleDynamicEncoder BeliefSpanDecoder ActSelectionModel ActDecoder DAMD get_final_scores analysis MultiWozDB MultiWozEvaluator BLEUScorer edit_distance formalize ldp filter_punct main parse_arg_cfg Model realization realization_multiwoz build_delex_group_multiwoz find_para_multiwoz build_delex_group slots_match_multiwoz delexicalisation slots_match find_para DataPreprocessor get_db_values preprocess_db pad_sequences _ReaderBase MultiWozReader dialog_turn_state_analysis get_glove_matrix py2np position_encoding_init Vocab padSeqs f1_score write_dict deepcopy delexicalisation zip append range enumerate deepcopy items split append range enumerate realization shuffle enumerate split realization_multiwoz shuffle enumerate split append sub enumerate str sub split append enumerate join list reverse append keys enumerate split append zip split append zip split split lower split min range len similar append run_metrics dump add_argument file ArgumentParser parse_args ev_class cuda_device model tuple ArgumentParser cuda seed dtype str load_model load_model_para set_device Model getattr count_params parse_args para_start_epoch current_device load_glove_embedding init_handler format cfg start_epoch eval info manual_seed setattr type add_argument mode reinforce_tune train split clean_replace_single asarray tuple min astype append max enumerate len info readlines astype float32 close glove_path average array split encode std open transpose fill zeros float range decode encode transpose fill zeros float range size all_weights orthogonal reset_parameters range hidden_size items replace strip lower sub clean_time sub get replace clean_text orthogonal_ apply LongTensor copy vocab_size to_one_hot cuda_ type range logsoftmax fill_ view insert size logsumexp LogSoftmax enumerate stack cat nonzero append range einsum len get_one_hot_input deepcopy cuda_ long get items list set add lower loads mkdir findall ZipFile append dtype setattr 
tuple cfg getattr type split exp_no eval_load_path batch_size parse_arg_cfg exp_path save_log exp_domains early_stop_count loads vocab_path_eval weight_decay_count lr mkdir model_path _init_logging_handler join read items save_vocab load items get print strip clean_slot_values append split load all_domains print endswith loads MultiWozReader str sorted OrderedDict append get lower ZipFile enumerate join items isinstance print sort aspan_to_act_list split len max len asarray print tuple min astype append max enumerate len array cos sin | # PARG This is the code for paper [Paraphrase Augmented Task-Oriented Dialog Generation](https://arxiv.org/abs/2004.07462). If you use any source code or dataset included in this repo in your work, please cite this paper: ``` @inproceedings{gao2020paraphrase, title={Paraphrase Augmented Task-Oriented Dialog Generation}, author={Gao, Silin and Zhang, Yichi and Ou, Zhijian and Yu, Zhou}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, pages={639--649}, year={2020} | 979 |
SilvioGiancola/SoccerNet-code | ['action detection', 'action classification', 'action recognition'] | ['SoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos'] | src/feature_extraction/i3d-feat-extract/extract_i3d_spatial_features.py src/ReadSplitData.py src/Detection/Evaluation/utils.py src/Classification/Trainer.py src/Detection/loupe.py src/Classification/Network.py src/Classification/Dataset.py src/Detection/Evaluation/get_detection_performance_spotting.py src/feature_extraction/FeatureExtractorResNet.py src/Detection/Network.py src/feature_extraction/FeatureExtractorBase.py src/Classification/ClassificationMinuteBased.py src/feature_extraction/main.py src/Detection/ActivityNet-Release1.3Proposals.py src/Classification/loupe.py src/Detection/utils.py src/Detection/Dataset.py src/Detection/Evaluation/eval_detection.py src/Detection/Evaluation/get_detection_performance_Soccer.py src/ReadAllData.py src/Detection/ClassificationSecondBased.py src/Detection/Evaluation/get_detection_performance.py src/feature_extraction/FeatureExtractorC3D.py src/Classification/infer_testing.py src/Detection/Evaluation/get_detection_performance_RecallPrecision.py src/Detection/Evaluation/get_detection_performance_Soccer_metric2.py src/Detection/Evaluation/get_detection_performance_tIoU.py src/ReadCommentaries.py src/ReadData.py ReadLabels ReadCommentaries ReadFeatures ReadCommentaries ReadLabels ReadFeatures ReadLabels ReadCommentaries ReadFeatures main dataset main PoolingBaseModel NetVLAD SoftDBoW NetFV NetRVLAD networkMinutes Trainer plot_metric run_evaluation main dataset PoolingBaseModel NetVLAD SoftDBoW NetFV NetRVLAD networkMinutes get_blocked_videos segment_iou interpolated_prec_rec wrapper_segment_iou main parse_input main parse_input main parse_input main parse_input main parse_input main parse_input get_blocked_videos segment_iou interpolated_prec_rec wrapper_segment_iou FeatureExtractorBase FeatureExtractorC3D conv_block resnet152_model FeatureExtractorResNet Scale identity_block main load_and_preprocess_input center_crop scale_image load join load join open Trainer dataset features set_value max_epoch networkMinutes training VLAD_k loadValidationDataset csv_file LR loadTrainingDataset jobid loadTestingDataset print PCA to_csv imbalance train network read_csv dataset_Minute ANETproposal proposals_per_video evaluate avg_recall recall show subplot zeros_like plot xlabel get_yticklabels grid get_xticklabels trapz ylabel ylim figure legend setp get_legend_handles_labels range enumerate model testing Request urlopen format sum hstack max minimum clip astype maximum empty segment_iou xrange evaluate ANETdetection add_argument ArgumentParser evaluateRecallPrecision str add str add conv_block Model load_weights identity_block Input range shape int replace print vreader center_crop len scale_image stack ffprobe append range split global_variables float32 placeholder Saver | # SoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos ## [DEPRECATED] Please visit https://github.com/SilvioGiancola/SoccerNetv2-DevKit for an updated version of that repository CVPR'18 Workshop on Computer Vision in Sports Available at [openaccess.thecvf.com](http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w34/Giancola_SoccerNet_A_Scalable_CVPR_2018_paper.pdf) ```bibtex @InProceedings{Giancola_2018_CVPR_Workshops, author = {Giancola, Silvio and Amine, Mohieddine and Dghaily, Tarek and Ghanem, Bernard}, title = {SoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos}, booktitle = 
{The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops}, month = {June}, | 980 |
SimiPixel/pool_based_active_learning | ['active learning'] | ['libact: Pool-based Active Learning in Python'] | poolAL/query_strategies/nearest_neighbour_criterion.py poolAL/query_strategies/core/__init__.py poolAL/query_strategies/active_learning_by_learning.py poolAL/query_strategies/__init__.py poolAL/query_strategies/representative_sampling.py poolAL/evaluate/calc_score_parallel.py poolAL/query_strategies/uncertainty_sampling.py poolAL/query_strategies/expected_error_reduction.py poolAL/query_strategies/core/query_strategy.py poolAL/query_strategies/core/models/svm.py poolAL/visualize/visualizer_grid.py poolAL/visualize/__init__.py poolAL/query_strategies/core/models/random_forest.py poolAL/evaluate/__init__.py poolAL/query_strategies/core/utils.py poolAL/query_strategies/class_balance_sampling.py poolAL/query_strategies/density_weighted_uncertainty_sampling.py poolAL/query_strategies/core/test_datasets.py poolAL/query_strategies/mean_distance_sampling.py poolAL/query_strategies/core/models/logistic_regression.py poolAL/query_strategies/core/models/model.py poolAL/visualize/visualizer.py poolAL/query_strategies/dynamic_ensemble_active_learning.py poolAL/query_strategies/rank_sampling.py poolAL/query_strategies/cluster_margin_sampling.py poolAL/query_strategies/random_sampling.py poolAL/evaluate/calc_score.py poolAL/query_strategies/fisher_information_sampling.py poolAL/query_strategies/core/dataset.py setup.py poolAL/query_strategies/query_by_committee.py CalcScore CalcScoreParallel ActiveLearningByLearning ClassBalanceSampling ClusterMarginSampling normalized_pw DensityWeightedUncertaintySampling DynamicEnsembleActiveLearning REXP4 ExpectedErrorReduction FisherInformationSampling MeanDistanceSampling NearestNeighbourCriterion QueryByCommittee kl_div RandomSampling RankSampling RepresentativeSampling UncertaintySampling Dataset QueryStrategy data_aux_scoure BinaryCircle BinaryCircleMargin LinearDecisionBoundary entropy unzipit euclidian_metric get_grid shuffle sort_by_2nd replace_labels zipit manhattan_metric MLogit rbf_kernel sigma Model RFC Aux SVM Visualizer VisualizerGrid score make_query clf round seed clear_output array append sum range update concatenate shuffle mean pop deepcopy time print reshape zeros train Dataset len seed pop join asarray clear cpu_count close map mean array Pool range len sqrt pw shape empty log range append rand int T arange Dataset hstack shuffle choice zipit array rand delete append array range len seed rand sign append einsum sort tolist seed arange class_balance tolist __len__ choice index array sk_shuffle Dataset concatenate reshape min fill copy flatten shape linspace append meshgrid max range deepcopy unique_labels astype get_mask_of_label int64 append | # pool_based_active_learning ## Installation Run in Terminal ``` pip install git+https://github.com/SimiPixel/pool_based_active_learning.git ``` ## Available Query strategies - QueryStrategy objects - RandomSampling - UncertaintySampling | 981 |
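To make the strategy list above concrete, here is a small, generic example of least-confidence uncertainty sampling using scikit-learn. It only illustrates the idea behind `UncertaintySampling`; the package's own `QueryStrategy` classes wrap this differently, and their exact interface is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_query(X_labeled, y_labeled, X_pool, n_queries=1):
    """Return indices of the pool samples the current model is least sure about."""
    model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_pool)
    confidence = proba.max(axis=1)              # probability of the predicted class
    return np.argsort(confidence)[:n_queries]   # least confident first
```

The queried samples are then labeled by an oracle, moved into the labeled set, and the model is refit, which is the basic pool-based active-learning loop.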
SimonWang00/psenet.tf2 | ['optical character recognition', 'scene text detection', 'curved text detection'] | ['Shape Robust Text Detection with Progressive Scale Expansion Network', 'Shape Robust Text Detection with Progressive Scale Expansion Network'] | train_fpn_resnet.py callbacks.py dataset/find_no_mark.py pse/__init__.py text_detector.py gen_dataset.py generator.py settings.py resnet_net.py utils.py dataset/json_to_txt.py __init__.py feature_pyramid.py dataset/txt_to_json.py model.py ESCallback TensorBoardCallback CustomLoaderCallback LRTensorBoard CheckpointSaveCallback CustomSavingCallback LRCallback top_down_and_merge_layer down_top_layer build_feature_pyramid unpooling convert_label_to_id convert_id_to_label scale_expand_kernels save_MTWI_2108_resault del_allfile text_porposcal fit_minarearectange_2 Generator scale_expand_kernel filter_label_by_area fit_boundingRect_2 BatchIndices ufunc_4 fit_boundingRect fit_minarearectange shrink_polygon read_dataset read_txt gen_dataset create_dataset cal_di iou dice_loss __model build_loss ohem_batch build_iou ohem_single mean_iou identity_block resnet_block_3 build_resnet text_detector_model resize_image2 show_rects recover_size resize_image predict val_one_step my_optimizer train_one_step train scale_expand_kernels del_allfile adjust_side scale_expand_kernel estimate_skew_angle fit_boundingRect save_MTWI_2108_resault text_porposcal rotate_cut_img translate_4_to_8 filter_label_by_area fit_boundingRect_2 fit_minarearectange convert_id_to_label BatchIndices seg_box_img convert_label_to_id fit_minarearectange_2 ufunc_4 batch_find find_no_mark list_dir save_nomark_file save_mark_file json_to_txt resize_img load_img record_txt list_dir batch_json_to_txt load_json resize_image wapper_annoation compute_ratio exract_jsonPath labelme_format extract_shape record_json_format exract_imagePath mark_format batch_txt_to_json load_txt_to_json extract_zb load_imageData load_file list_dir pse ModelCheckpoint build_resnet top_down_and_merge_layer concat down_top_layer glob join remove zeros items ones items range count_nonzero ufunc_4 range connectedComponents astype scale_expand_kernel filter_label_by_area T print boxPoints int0 contourArea append minAreaRect range append range T boundingRect append array range T boundingRect append array range append float round split glob join read_txt append arcLength range contourArea PyclipperOffset JT_ROUND append AddPath ET_CLOSEDPOLYGON Execute join drawContours convert_label_to_id shrink_polygon imwrite ones save zeros imread cal_di array enumerate int read_dataset del_allfile gen_dataset len Model Input build_feature_pyramid dice_loss float32 ohem_batch logical_or cast bool range map_fn reduce_sum reduce_mean reduce_sum equal cast float32 reduce_sum format isinstance iou range cast str str identity_block resnet_block_3 range restore __model latest_checkpoint Checkpoint CheckpointManager int int append array rectangle imwrite scale_expand_kernels text_detector_model format text_porposcal print reshape adjust_side INTER_AREA logical_and recover_size fit_boundingRect_2 resize_image2 get_text_line show_rects resize append len trainable_variables apply_gradients gradient model build_loss float32 reduce_mean cast mean_iou Adam SGD __next__ __model Generator Adam build_iou CheckpointSaveCallback TensorBoardCallback scalar fit append float round crop pi var percentile_filter zoom rotate mean amin shape append resize_im range max clip amax append format imwrite enumerate glob join append replace mkdir copy mkdir copy 
find_no_mark list_dir save_nomark_file save_mark_file open imread resize str write open get replace record_txt loads load_json wapper_annoation json_to_txt resize_img format load_img replace imwrite print resize_image list_dir compute_ratio append isinstance replace replace size open load_file append float round split b64encode read str exract_jsonPath extract_shape labelme_format record_json_format exract_imagePath mark_format extract_zb load_imageData list_dir load_txt_to_json connectedComponents uint8 astype append pse_cpp array range len | # psenet.tf2 An implementation of the PSENet algorithm on the TensorFlow 2.0 framework. Typical application scenarios are image text detection and invoice text detection, and the repository also includes the dataset preparation pipeline. ## Introduction The Progressive Scale Expansion Network (PSENet) is a text detector that performs very well at detecting arbitrarily shaped text in natural scene images. ## Requirements - python3.+ - tensorflow2.+ - opencv-python ## Datasets ### 1. Text annotation | 982
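The scale expansion mentioned in the Introduction grows each text instance from its smallest predicted kernel outwards through progressively larger kernels, which is what lets PSENet separate adjacent text lines. A simplified sketch of that post-processing idea follows; it is slower and cruder than the repository's `scale_expand_kernels` / `pse_cpp` implementation and is only meant to show the mechanism:

```python
import cv2
import numpy as np

def progressive_scale_expansion(kernels):
    """kernels: list of binary masks ordered from smallest to largest text kernel."""
    num, labels = cv2.connectedComponents(kernels[0].astype(np.uint8))
    for kernel in kernels[1:]:
        prev = None
        # repeatedly let every labelled region claim adjacent, still unlabelled
        # pixels that lie inside the next (larger) kernel mask
        while prev is None or (labels != prev).any():
            prev = labels.copy()
            for lab in range(1, num):
                region = (labels == lab).astype(np.uint8)
                grown = cv2.dilate(region, np.ones((3, 3), np.uint8))
                claim = (grown == 1) & (kernel > 0) & (labels == 0)
                labels[claim] = lab
    return labels  # final instance map at full text scale
```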
SinaMohseni/Saliency-Based-Failure-prediction-for-Autonomous-Vehicle | ['autonomous driving'] | ['Predicting Model Failure using Saliency Maps in Autonomous Driving Systems'] | networks/pilotNet_5_layer_128_gray.py networks/pilotNet_5_student_gray.py evaluation.py get_saliency_dataset.py run_student.py networks/pilotNet_5_layer_128_rgb.py networks/pilotNet_5_student_grb.py train_main.py train_student.py run_main.py img_prc/image_proc_gray.py img_prc/dataset_synth.py visual_back_prop.py img_prc/dataset_synth_shrt.py img_prc/dataset_synth_proc.py evaluation run_model run_student show_activation training_set random_flip random_brightness augument_C augument_LR pick_one load_image flip dataset_synth random_flip random_brightness augument_C augument_LR pick_one load_image flip dataset_synth random_flip random_brightness augument_C augument_LR pick_one load_image flip dataset_synth _bias_variable _weight_variable _conv2d PilotNet _weight_variable _conv2d _bias_variable _conv2d_norm PilotNet _bias_variable _weight_variable _conv2d PilotNet _bias_variable _weight_variable _conv2d PilotNet destroyAllWindows open map call shape append WINDOW_NORMAL sum imread close splitlines fill float empty namedWindow print dstack moveWindow zeros len namedWindow close WINDOW_NORMAL shape moveWindow splitlines Saver imread PilotNet destroyAllWindows open namedWindow close dstack WINDOW_NORMAL shape moveWindow splitlines Saver fill zeros imread empty PilotNet destroyAllWindows open subplot str imresize imshow title Saver figure _generate_feature_image imread destroyAllWindows rand augument_C augument_LR append range len INTER_AREA resize choice flip COLOR_RGB2HSV rand cvtColor random_brightness random_flip pick_one random_brightness flip load_image permutation load_image truncated_normal constant | # Saliency-Based-Failure-prediction-for-Autonomous-Vehicle
This repository contains the implementation of our UDL Workshop submission at ICML 2019: https://arxiv.org/pdf/1905.07679.pdf
| 983 |
Singh07-Shubham/pytorch-object-detection | ['multiple object tracking'] | ['Simple Online and Realtime Tracking'] | utils/utils.py utils/datasets.py object_tracker.py utils/parse_config.py models.py sort.py YOLOLayer create_modules Darknet EmptyLayer detect_image KalmanBoxTracker iou Sort convert_bbox_to_z associate_detections_to_trackers convert_x_to_bbox parse_args ImageFolder ListDataset parse_data_config parse_model_config compute_ap build_targets bbox_iou_numpy to_categorical weights_init_normal load_classes bbox_iou non_max_suppression pop int YOLOLayer Sequential ZeroPad2d MaxPool2d add_module Conv2d ModuleList EmptyLayer Upsample append BatchNorm2d LeakyReLU sum enumerate unsqueeze_ Variable Compose min type float round minimum maximum float sqrt linear_assignment iou concatenate reshape append zeros empty enumerate add_argument ArgumentParser rstrip strip open startswith append split dict strip split open data normal_ __name__ constant_ concatenate size maximum sum range clamp min max minimum eps expand_dims maximum data sort new squeeze size shape unsqueeze cuda unique bbox_iou append max is_cuda cat enumerate int fill_ FloatTensor ones concatenate size range unsqueeze bbox_iou zeros argmax log | # i have used car_env for this project # PyTorch Object Detection and Tracking Object detection in images, and tracking across video frames Full story at: https://towardsdatascience.com/object-detection-and-tracking-in-pytorch-b3cf1a696a98 References: 1. YOLOv3: https://pjreddie.com/darknet/yolo/ 2. Erik Lindernoren's YOLO implementation: https://github.com/eriklindernoren/PyTorch-YOLOv3 3. YOLO paper: https://pjreddie.com/media/files/papers/YOLOv3.pdf 4. SORT paper: https://arxiv.org/pdf/1602.00763.pdf | 984 |
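The SORT tracker referenced in this record matches new detections to existing Kalman-filter tracks by solving a bipartite assignment over an IoU cost matrix (the repository exposes this through `iou`, `associate_detections_to_trackers`, and `KalmanBoxTracker` in `sort.py`). Below is a rough, self-contained sketch of that association step, assuming boxes in `[x1, y1, x2, y2]` format and using SciPy's Hungarian solver rather than whatever the repository uses internally:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    xx1, yy1 = max(a[0], b[0]), max(a[1], b[1])
    xx2, yy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, xx2 - xx1) * max(0.0, yy2 - yy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(detections, tracks, iou_threshold=0.3):
    if len(detections) == 0 or len(tracks) == 0:
        return [], list(range(len(detections))), list(range(len(tracks)))
    cost = np.zeros((len(detections), len(tracks)))
    for d, det in enumerate(detections):
        for t, trk in enumerate(tracks):
            cost[d, t] = -iou(det, trk)      # Hungarian solver minimises cost
    rows, cols = linear_sum_assignment(cost)
    matches = [(d, t) for d, t in zip(rows, cols) if -cost[d, t] >= iou_threshold]
    matched_d = {d for d, _ in matches}
    matched_t = {t for _, t in matches}
    unmatched_d = [d for d in range(len(detections)) if d not in matched_d]
    unmatched_t = [t for t in range(len(tracks)) if t not in matched_t]
    return matches, unmatched_d, unmatched_t
```

Matched tracks are updated with their detection, unmatched detections spawn new tracks, and unmatched tracks are aged out, which is essentially the SORT loop.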
Sivaneshmsc/FashionMNIST | ['data augmentation'] | ['DENSER: Deep Evolutionary Network Structured Representation'] | utils/helper.py configs.py benchmark/convnet.py app.py benchmark/runner.py utils/argparser.py utils/mnist_reader.py visualization/project_zalando.py start_s3_sync get_json_logger touch touch_dir _get_logger main cnn_model_fn PredictJob JobWorker JobManager get_args_request parse_arg get_args_cli now_int upload_result_s3 get_sprite_image invert_grayscale create_sprite_image vector_to_matrix_mnist UploadS3Thread load_mnist UploadS3Thread start Event dirname makedirs makedirs setFormatter touch_dir DEBUG getLogger addHandler StreamHandler Formatter touch setLevel INFO FileHandler setFormatter getLogger addHandler Formatter touch setLevel INFO FileHandler dense max_pooling2d dropout one_hot minimize reshape GradientDescentOptimizer conv2d softmax_cross_entropy asarray evaluate print Estimator shuffle labels images numpy_input_fn train range read_data_sets int append items defaultdict utcfromtimestamp info int isinstance ones sqrt ceil array range vector_to_matrix_mnist invert_grayscale join | # Fashion-MNIST [](https://github.com/zalandoresearch/fashion-mnist/) [](https://gitter.im/fashion-mnist/Lobby?utm_source=share-link&utm_medium=link&utm_campaign=share-link) [](README.zh-CN.md) [](README.ja.md) [](https://opensource.org/licenses/MIT) <details><summary>Table of Contents</summary><p> * [Why we made Fashion-MNIST](#why-we-made-fashion-mnist) * [Get the Data](#get-the-data) * [Usage](#usage) | 985 |
SiyuanQi/gpnn | ['human object interaction detection'] | ['Learning Human-Object Interactions by Graph Parsing Neural Networks'] | src/python/datasets/VCOCO/feature_model.py src/python/datasets/HICO/extract_vgg_features.py src/python/datasets/VCOCO/metadata.py src/python/models/GPNN_VCOCO.py src/python/units/ConvLSTM.py src/python/cad120_graph.py src/python/datasets/__init__.py src/python/models/GPNN_CAD.py src/python/datasets/HICO/parse_features.py src/python/datasets/HICO/hico_config.py src/python/datasets/VCOCO/__init__.py src/python/datasets/VCOCO/plot_gt_hoi.py src/python/vcoco_graph.py src/python/visualization_utils.py src/python/cad120.py src/python/datasets/VCOCO/vcoco_config.py src/python/units/MessageFunction.py src/python/datasets/VCOCO/finetune.py src/python/models/GPNN_HICO.py src/python/datasets/CAD120/parse_features_prediction.py src/python/logutil.py src/python/models/__init__.py src/python/datasets/CAD120/parse_features.py src/python/hico_graph.py src/python/datasets/utils.py src/python/datasets/CAD120/metadata.py src/python/datasets/HICO/finetune.py src/python/datasets/VCOCO/extract_roi_features.py src/python/units/__init__.py src/python/datasets/VCOCO/roi_pooling.py src/python/datasets/CAD120/cad120.py src/python/vcoco.py src/python/datasets/VCOCO/vcoco.py src/python/hico.py src/python/datasets/HICO/roi_pooling.py src/python/utils.py src/python/datasets/HICO/extract_roi_features.py src/python/datasets/HICO/find_rare_hoi.py src/python/units/LinkFunction.py src/python/datasets/HICO/roi_feature_model.py src/python/datasets/HICO/metadata.py src/python/units/UpdateFunction.py src/python/datasets/HICO/hico.py src/python/datasets/VCOCO/parse_features.py src/python/datasets/VCOCO/extract_vgg_features.py src/python/datasets/CAD120/cad120_config.py src/python/units/ReadoutFunction.py src/python/config.py src/python/cad120_prediction.py validate parse_arguments evaluation main train main train parse_arguments validate validate parse_arguments evaluation main train set_logger Paths validate get_indices compute_mean_avg_prec gen_test_result weighted_loss parse_arguments evaluation loss_fn main train validate criterion parse_arguments evaluation main train main AverageMeter Logger get_cad_data plot_all_affordance_segmentations plot_segmentation get_vcoco_data parse_result plot_all_activity_segmentations visualize_vcoco_result plot_confusion_matrix get_label_bar main to_variable get_hico_data plot_box_with_label append_results validate compute_mean_avg_prec train weighted_loss main parse_arguments evaluation loss_fn vcoco_evaluation get_vcocoeval append_result validate criterion parse_arguments evaluation main train draw_bounding_boxes_on_image_array save_image_array_as_png encode_image_array_as_png_str draw_mask_on_image_array visualize_hoi draw_bounding_box_on_image_array draw_bounding_box_on_image draw_bounding_boxes_on_image draw_keypoints_on_image draw_keypoints_on_image_array draw_hoi_line visualize_boxes_and_labels_on_image_array collate_fn_cad load_best_checkpoint collate_fn_vcoco save_checkpoint main collate_fn_hico main CAD120 parse_arguments set_logger Paths main main parse_colon_seperated_features read_features collect_data main parse_colon_seperated_features read_features collect_data get_info get_valid_roi combine_box main get_model extract_features get_info Vgg16 combine_box main get_model extract_features collect_hoi_stats split_testing_set main find_rare_hoi main HICO parse_arguments set_logger Paths main action_to_obj_idx read_features parse_classes compute_area 
read_features_ collect_data get_intersection main get_node_index get_valid_roi Vgg16 perturb_box HICO combine_box parse_classes compute_iou compute_area perturb_gt_box Densenet get_intersection main Resnet152 roi_pooling adaptive_max_pool AdaptiveMaxPool2d get_info combine_box main get_model extract_features get_info Vgg16 combine_box main get_model extract_features get_valid_roi Vgg16 perturb_box combine_box parse_classes compute_iou compute_area perturb_gt_box VCOCO Densenet get_intersection main Resnet152 main visualize_roi parse_features parse_classes compute_iou compute_area collect_data get_intersection main vcoco_evaluation get_vcocoeval append_result get_node_index main plot_set plot_box_with_label roi_pooling adaptive_max_pool AdaptiveMaxPool2d main parse_arguments VCOCO set_logger Paths main main GPNN_CAD main GPNN_HICO main GPNN_VCOCO ConvLSTMCell ConvLSTM main LinkFunction main MessageFunction main ReadoutFunction UpdateFunction main list extend numpy argmax range get_cad_data validate load_best_checkpoint save_checkpoint Logger cuda seed list Adam strftime MSELoss epochs append range format inf param_groups GPNN_CAD mean start_epoch lr manual_seed log_root join time print min parameters train step array update time format criterion model backward print size AverageMeter zero_grad log_value avg evaluation to_variable step cuda enumerate model to_variable cuda visualize list affordances plot_all_activity_segmentations precision_recall_fscore_support update format size tmp_root log_value eval avg plot_confusion_matrix enumerate subactivities join time plot_all_affordance_segmentations print AverageMeter extend confusion_matrix evaluation makedirs add_argument Paths ArgumentParser LinkFunction setFormatter getLogger addHandler makedirs Formatter dirname DEBUG setLevel FileHandler exp vstack empty enumerate Variable size cuda ones list view where mean weighted_loss zip append numpy max range nansum len average_precision_score get_hico_data GPNN_HICO gen_test_result MultiLabelSoftMarginLoss int compute_mean_avg_prec empty loss_fn len str len compute_mean_avg_prec empty vis_top_k loss_fn list action_to_obj_idx action_classes append numpy range len model vstack get_indices to_variable cuda str list append range concatenate tmp_root eval empty enumerate items time join print index zfill dict savemat Variable size cuda ones permute permute cuda load join permutation format print len tmp_root DataLoader CAD120 open empty enumerate update subplot show set_xticklabels set_yticklabels close GridSpec imshow set_xticks savefig figure get_label_bar tick_params enumerate len join list plot_segmentation append enumerate makedirs join list plot_segmentation append range enumerate makedirs arange tick_params xticks max yticks show list imshow title savefig gca range format product astype tight_layout close print text len seed join format print HICO DataLoader data_root len join format feature_type print VCOCO DataLoader data_root len join list format items set_xticklabels endswith set_yticklabels astype axis close imshow savefig splitext zip gca imread plot_box_with_label LINE_AA putText tuple tolist FONT_HERSHEY_SIMPLEX rectangle join load_coco data_root parse_result format copy dict action_classes append vcoco_metadata range enumerate numpy append_result range join dump format eval_root _do_eval open cuda eval_root get_vcoco_data get_vcocoeval GPNN_VCOCO makedirs list append_results append_results visualize_vcoco_result vcoco_evaluation range convert fromarray uint8 BytesIO close getvalue save 
convert draw_bounding_box_on_image truetype line Draw isinstance text size rectangle ceil getsize max enumerate fromarray array draw_bounding_boxes_on_image copyto shape range draw_bounding_box_on_image draw_keypoints_on_image convert ellipse Draw tuple size zip fromarray ones_like list getrgb reshape convert logical_or any expand_dims composite int list defaultdict format items tuple draw_mask_on_image_array tolist min extend draw_bounding_box_on_image_array append draw_keypoints_on_image_array range line ellipse Draw empty range int list defaultdict format items tuple draw_mask_on_image_array tolist min extend draw_bounding_box_on_image_array draw_hoi_line append draw_keypoints_on_image_array range list FloatTensor ndim array append zeros max enumerate list FloatTensor ndim array append zeros max enumerate list FloatTensor ndim array append zeros max enumerate copyfile join save makedirs load join format print resume load_state_dict isfile cuda makedirs load permutation len tmp_root CAD120 open dict int basename range join list dump tmp_root dict open data_root append listdir makedirs collect_data Paths load join Vgg16 format Densenet tmp_root DataParallel load_state_dict Resnet152 features cuda join format data_root min max feature_network vstack save resize cuda list combine_box append imread range get_info format Compose astype empty enumerate join print makedirs extend transform zeros get_model numpy len extract_features Compose AdaptiveMaxPool2d zeros range load join list str format strip data_root append range join collect_hoi_stats data_root split_testing_set loadmat find_rare_hoi HICO data_root range get_intersection array range compute_area load join dump format print reshape parse_classes get_node_index save open zeros makedirs load join list format dump print reshape makedirs parse_classes dict get_node_index action_classes save open zeros range len read_features loadmat get_intersection compute_area rand array copy perturb_gt_box size adaptive_max_pool mul_ append float long range COCO getImgIds loadImgs load_coco load_vcoco tolist load attach_gt_boxes data_root vgg16 copy float adaptive_max_pool rand array copy perturb_box compute_iou warn load_coco action_classes save open load_vcoco list tolist len append get_node_index range dump format parse_classes get_vcocoeval append_result enumerate load join int print reshape attach_gt_boxes isnan dict data_root vcoco_evaluation zeros makedirs show join format imshow data_root imread parse_features set_yticklabels axis load_coco plot_box_with_label load_vcoco tolist imshow savefig gca imread range format set_xticklabels astype close copy tmp_root splitext loadImgs enumerate join reshape attach_gt_boxes makedirs dict data_root len plot_set VCOCO | SiyuanQi/gpnn | 986 |
Sj-Amani/Practical_Behavioral_Cloning | ['stochastic optimization'] | ['Adam: A Method for Stochastic Optimization'] | horizontal_flip.py middle_driving_data_preprocessing.py draw.py drive.py left_right_driving_data_preprocessing.py map_driving_data.py model.py video.py send_control connect telemetry moving_average moving_average zero_angle_steering generator get_model nn_model_training main BytesIO asarray b64decode ANTIALIAS print convert send_control resize float predict open print send_control emit cumsum int delete choice append enumerate len asarray ANTIALIAS convert resize append array ELU Lambda Sequential compile add Dense Convolution2D Flatten Dropout generator time to_json print fit_generator save_weights history summary get_model train_test_split sorted format print image_folder add_argument write_videofile ImageSequenceClip ArgumentParser parse_args fps | # Practical Behavioral Cloning ## Overview In this project, I will use CNN deep learning to clone driving behavior. I will train, validate and test a model using Keras. The model will output a steering angle to an autonomous vehicle. I am using a [simulator](https://github.com/udacity/self-driving-car-sim) - Version 1, 12/09/16 - where you can steer a car around a track for data collection. You'll use image data and steering angles to train a neural network and then use this model to drive the car autonomously around the track. ## Goals The goals / steps of this project are the following: * Use the simulator to collect data of good driving behavior * Build a convolutional neural network in Keras that predicts steering angles from images (model.py) * Train and validate the model with a training and validation set (model.py) | 987
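The Keras layers in this record's dependency list (Lambda, Convolution2D, Flatten, Dense, Dropout, ELU) point to an NVIDIA-style steering regressor. The sketch below is an illustrative guess at such a model; the exact layer sizes and preprocessing in the repository's model.py may differ:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Flatten, Dense, Dropout

def build_model(input_shape=(66, 200, 3)):
    model = Sequential()
    # normalise pixels to [-1, 1] inside the graph so drive-time input needs no extra code
    model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=input_shape))
    model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='elu'))
    model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='elu'))
    model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='elu'))
    model.add(Conv2D(64, (3, 3), activation='elu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='elu'))
    model.add(Dense(50, activation='elu'))
    model.add(Dense(1))                      # single regression output: steering angle
    model.compile(loss='mse', optimizer='adam')
    return model
```

Training then amounts to fitting this network with mean-squared error on (camera image, steering angle) pairs recorded from the simulator, typically fed through a generator that applies flips and brightness augmentation.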
SkafteNicki/john | ['active learning', 'gaussian processes'] | ['Reliable training and estimation of variance networks'] | experiment_gradient.py experiment_contrib.py locality_sampler.py experiment_vae.py toy_regression.py toy_vae.py toy_weather.py utils.py experiment_regression.py experiment_active_learning.py argparser gp john mcdnn ensnn nn nnmv jnmv local_batchify argparser nnlsmv jnls nnls nn jn jnlsmv gpnn argparser gp john mcdnn bnn sgp ensnn rbfnn nn argparser john BatchReshape vae BatchFlatten basemodel local_batchify generate_data locality_sampler2 get_pseupoch gen_Qw locality_sampler local_batchify2 dropout plot generate_data gp neuralnet john plot2 bnn ens_john ensemble john rbf vae savefig basemodel set_axes dropout plot plot3 gp neuralnet john plot2 get_data safefloat bnn ensemble add_argument_group parse_args add_argument ArgumentParser RBF optimize reshape pi flatten dot sqrt mean GPRegression constrain_positive log predict normalize_y Sequential zero_grad device ReLU tensor cuda Adam set_postfix chain to sum next update close batchify Linear backward tqdm parameters Softplus step Sequential zero_grad ReLU device cuda Adam set_postfix to next update close mean batchify Linear backward tqdm parameters step Dropout normalize_y Sequential zero_grad normal_log_prob device ReLU tensor n_models cuda Adam set_postfix append chain to sum next range update close mean sqrt stack batchify Linear backward tqdm parameters Softplus step n_clusters normalize_y model KMeans locality_sampler2 zero_grad device tensor cuda Adam gen_Qw chain to next format concatenate eval item batchify GPNNModel iters backward print min cluster_centers_ float32 parameters train step fit gen_Qw int32 astype normal_log_prob sqrt normalize_y Sequential zero_grad normal_log_prob device ReLU tensor cuda Adam set_postfix chain to sum next update close sqrt Linear local_batchify backward tqdm parameters Softplus step normalize_y Sequential zero_grad normal_log_prob device ReLU tensor cuda Adam set_postfix to sum next update close sqrt batchify Linear backward tqdm parameters Softplus step normalize_y Sequential zero_grad normal_log_prob device ReLU tensor cuda Adam set_postfix to sum next update close sqrt Linear local_batchify backward tqdm parameters Softplus step n_clusters normalize_y model KMeans zero_grad device tensor cuda Adam set_postfix normal_log_prob_w_prior to next sum update concatenate close sqrt batchify GPNNModel backward min cluster_centers_ float32 tqdm parameters step fit n_clusters normalize_y model KMeans zero_grad device tensor cuda Adam set_postfix normal_log_prob_w_prior to next sum update concatenate close sqrt GPNNModel local_batchify backward reshape min cluster_centers_ float32 tqdm parameters step fit n_clusters normalize_y model KMeans zero_grad device tensor cuda Adam set_postfix normal_log_prob_w_prior chain to next sum update concatenate close sqrt batchify GPNNModel backward min cluster_centers_ float32 tqdm parameters step fit n_clusters normalize_y model KMeans zero_grad device tensor cuda Adam set_postfix chain to next update concatenate close sqrt GPNNModel local_batchify backward reshape min cluster_centers_ float32 t_likelihood tqdm parameters step fit SparseGPRegression RBF optimize inducing reshape min pi flatten dot sqrt mean constrain_positive log predict flatten normal_log_prob mcmc range sqrt normalize_y sample_net var losses float32 placeholder mean sqrt normal_log_prob reset_default_graph sum n_clusters normalize_y Reciprocal PosLinear KMeans Sequential zero_grad 
normal_log_prob device ReLU tensor cuda Adam set_postfix chain to next sum update close sqrt batchify Linear RBF backward min cluster_centers_ float32 tqdm parameters step fit n_clusters normalize_y PosLinear KMeans Sequential zero_grad Sigmoid normal_log_prob device ReLU tensor cuda Adam set_postfix OneMinusX chain to next sum update concatenate close sqrt Norm2 batchify Linear RBF backward min cluster_centers_ float32 tqdm parameters step fit fit_transform sqrt reshape PCA t_likelihood query bincount KDTree flatten choice unique append randint len choice unique append randint len min maximum PCA fit_transform uniform f linspace randn gen_Qw int32 astype GaussianLikelihood format model backward print zero_grad Adam ExactMarginalLogLikelihood GPModel parameters eval item train step format NNModel model backward print zero_grad Adam parameters eval item step format NNModel model backward print zero_grad Adam mean parameters item step format NNModel model backward print zero_grad Adam parameters eval item step range int HamiltonianMonteCarlo std print reshape make_log_joint_fn array numpy sample_chain sum range get_pseupoch Tensor model KMeans locality_sampler2 zero_grad tensor Adam gen_Qw append chain sum range format concatenate mean eval stack item GPNNModel backward print cluster_centers_ parameters get_pseupoch Tensor train step fit str subplots set_fontsize concatenate get_yticklabels set_xlabel f axis get_xticklabels ravel set_ylabel savefig fill numpy str subplots set_fontsize plot get_yticklabels set_xlabel get_xticklabels isfinite set_ylabel savefig nan legend zip numpy str axis range float show int list readline T reader zip plot ones close flatten safefloat NaN split append array open numpy list xlabel ylabel figure str plot xlabel ylabel mean flatten numpy savefig figure legend zip abs format set_window_title title round | Code for paper: "Reliable training and estimation of variance networks" | 988 |
Skielex/slgbuilder | ['semantic segmentation'] | ['Sparse Layered Graphs for Multi-Object Segmentation'] | slgbuilder/qpbobuilder.py slgbuilder/graphobject.py slgbuilder/orbuilder.py slgbuilder/slgbuilder.py slgbuilder/__init__.py setup.py slgbuilder/bkbuilder.py BKBuilder GraphObject ORBuilder QPBOBuilder SLGBuilder | # Python package for building and solving Sparse Layered Graphs This package allows building and solving image segmentation problems such as [Markov Random Fields](https://en.wikipedia.org/wiki/Markov_random_field) (MRF) and [Sparse Layered Graphs](http://openaccess.thecvf.com/content_CVPR_2020/papers/Jeppesen_Sparse_Layered_Graphs_for_Multi-Object_Segmentation_CVPR_2020_paper.pdf) (SLG) using *s-t* graph cuts. The package itself is written purely in Python and contains logic for building graphs for both single- and multi-label problems. To solve the optimization problem it relies on a min-cut/max-flow algorithm (see [Solvers](#solvers)). ## Installation Install the default package using `pip install slgbuilder` or clone the repository. See [Dependencies](#dependencies) for more. ## What is it for? The package is primarily targeted at multi-label/multi-object image segmentation problems. Common uses include: - Surface detection - Constrained multi-surface detection - Object segmentation - Interacting multi-object segmentation | 989
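For readers unfamiliar with the s-t graph-cut formulation mentioned in that description, the energy such packages minimize has the standard pairwise MRF form (textbook background, not slgbuilder code):

```latex
E(\mathbf{x}) \;=\; \sum_{i \in \mathcal{V}} D_i(x_i) \;+\; \sum_{(i,j) \in \mathcal{E}} V_{ij}(x_i, x_j), \qquad x_i \in \{0,1\}
```

When every pairwise term is submodular, i.e. V_ij(0,0) + V_ij(1,1) <= V_ij(0,1) + V_ij(1,0), the global minimum is found exactly by a minimum s-t cut on a graph whose terminal edges encode the data terms D_i and whose neighbour edges encode V_ij. The sparse layered construction of the cited paper stacks one such graph per object or surface and couples the layers with containment and exclusion edges, which appears to be what the builder classes in this package (BKBuilder, QPBOBuilder, ORBuilder) assemble.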
Skylerzhang023/MediaAISuperCloneBoy | ['unity'] | ['Unity: A General Platform for Intelligent Agents'] | ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_input_pb2.py ml-agents-envs/mlagents/envs/communicator_objects/unity_to_external_pb2.py gym-unity/gym_unity/envs/__init__.py ml-agents/mlagents/trainers/learn.py ml-agents-envs/mlagents/envs/communicator_objects/custom_observation_pb2.py ml-agents/mlagents/trainers/meta_curriculum.py ml-agents/mlagents/trainers/tests/test_barracuda_converter.py ml-agents/mlagents/trainers/ppo/models.py gym-unity/gym_unity/__init__.py ml-agents/mlagents/trainers/trainer_controller.py ml-agents/mlagents/trainers/tests/test_curriculum.py ml-agents/mlagents/trainers/action_info.py ml-agents-envs/mlagents/envs/communicator.py ml-agents-envs/mlagents/envs/communicator_objects/custom_reset_parameters_pb2.py ml-agents/mlagents/trainers/tests/test_ppo.py ml-agents-envs/mlagents/envs/tests/test_rpc_communicator.py ml-agents-envs/setup.py ml-agents-envs/mlagents/envs/rpc_communicator.py ml-agents/mlagents/trainers/tests/test_trainer_controller.py ml-agents/setup.py ml-agents/mlagents/trainers/barracuda.py ml-agents-envs/mlagents/envs/tests/test_envs.py ml-agents/mlagents/trainers/ppo/trainer.py ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_output_pb2.py ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_initialization_output_pb2.py ml-agents-envs/mlagents/envs/communicator_objects/unity_input_pb2.py ml-agents/mlagents/trainers/tests/test_meta_curriculum.py ml-agents/mlagents/trainers/bc/trainer.py ml-agents/mlagents/trainers/curriculum.py ml-agents-envs/mlagents/envs/communicator_objects/agent_action_proto_pb2.py ml-agents/mlagents/trainers/tests/test_policy.py ml-agents/mlagents/trainers/ppo/policy.py ml-agents-envs/mlagents/envs/communicator_objects/space_type_proto_pb2.py ml-agents/mlagents/trainers/tests/test_learn.py ml-agents-envs/mlagents/envs/communicator_objects/brain_parameters_proto_pb2.py ml-agents/mlagents/trainers/tests/test_demo_loader.py ml-agents/mlagents/trainers/models.py ml-agents/mlagents/trainers/__init__.py ml-agents-envs/mlagents/envs/communicator_objects/agent_info_proto_pb2.py ml-agents-envs/mlagents/envs/communicator_objects/environment_parameters_proto_pb2.py ml-agents-envs/mlagents/envs/tests/test_subprocess_unity_environment.py ml-agents/mlagents/trainers/exception.py gym-unity/gym_unity/tests/test_gym.py ml-agents/mlagents/trainers/buffer.py ml-agents/mlagents/trainers/bc/online_trainer.py ml-agents-envs/mlagents/envs/communicator_objects/engine_configuration_proto_pb2.py ml-agents/mlagents/trainers/ppo/__init__.py ml-agents/mlagents/trainers/tensorflow_to_barracuda.py ml-agents-envs/mlagents/envs/communicator_objects/unity_to_external_pb2_grpc.py ml-agents/mlagents/trainers/policy.py ml-agents-envs/mlagents/envs/mock_communicator.py gym-unity/setup.py ml-agents-envs/mlagents/envs/communicator_objects/unity_message_pb2.py ml-agents-envs/mlagents/envs/environment.py ml-agents-envs/mlagents/envs/communicator_objects/custom_action_pb2.py ml-agents/mlagents/trainers/bc/policy.py ml-agents-envs/mlagents/envs/base_unity_environment.py ml-agents/mlagents/trainers/bc/__init__.py ml-agents-envs/mlagents/envs/communicator_objects/unity_output_pb2.py ml-agents-envs/mlagents/envs/exception.py gym-unity/gym_unity/envs/unity_env.py ml-agents-envs/mlagents/envs/communicator_objects/header_pb2.py ml-agents-envs/mlagents/envs/brain.py ml-agents-envs/mlagents/envs/communicator_objects/demonstration_meta_proto_pb2.py 
ml-agents-envs/mlagents/envs/communicator_objects/resolution_proto_pb2.py ml-agents-envs/mlagents/envs/communicator_objects/__init__.py ml-agents-envs/mlagents/envs/subprocess_environment.py ml-agents/mlagents/trainers/demo_loader.py ml-agents-envs/mlagents/envs/__init__.py ml-agents/mlagents/trainers/tests/test_trainer_metrics.py ml-agents/mlagents/trainers/tests/test_buffer.py ml-agents-envs/mlagents/envs/communicator_objects/command_proto_pb2.py ml-agents/mlagents/trainers/trainer.py ml-agents-envs/mlagents/envs/socket_communicator.py ml-agents/mlagents/trainers/bc/models.py ml-agents/mlagents/trainers/bc/offline_trainer.py ml-agents/mlagents/trainers/tests/test_bc.py ml-agents-envs/mlagents/envs/communicator_objects/unity_rl_initialization_input_pb2.py ml-agents/mlagents/trainers/trainer_metrics.py UnityGymException ActionFlattener UnityEnv create_mock_vector_braininfo test_gym_wrapper test_multi_agent test_branched_flatten setup_mock_unityenvironment create_mock_brainparams ActionInfo BarracudaWriter compress Build sort lstm write fuse_batchnorm_weights trim gru Model summary Struct parse_args to_json rnn BufferException Buffer Curriculum make_demo_buffer load_demonstration demo_to_buffer CurriculumError MetaCurriculumError TrainerError create_environment_factory run_training prepare_for_docker_run try_create_meta_curriculum main load_config MetaCurriculum LearningModel Policy UnityPolicyException get_layer_shape pool_to_HW flatten process_layer process_model basic_lstm get_attr ModelBuilderContext order_by get_epsilon get_tensor_dtype replace_strings_in_list get_tensor_dims by_op remove_duplicates_from_list by_name convert strides_to_HW get_tensor_data gru UnityTrainerException Trainer TrainerController TrainerMetrics BehavioralCloningModel OfflineBCTrainer OnlineBCTrainer BCPolicy BCTrainer PPOModel PPOPolicy PPOTrainer get_gae discount_rewards test_barracuda_converter test_dc_bc_model test_cc_bc_model test_visual_cc_bc_model test_bc_policy_evaluate dummy_config test_visual_dc_bc_model assert_array test_buffer location default_reset_parameters test_init_curriculum_bad_curriculum_raises_error test_init_curriculum_happy_path test_increment_lesson test_get_config test_load_demo basic_options test_docker_target_path test_run_training test_init_meta_curriculum_happy_path test_increment_lessons_with_reward_buff_sizes default_reset_parameters MetaCurriculumTest test_increment_lessons measure_vals reward_buff_sizes test_set_all_curriculums_to_lesson_num test_get_config test_set_lesson_nums test_init_meta_curriculum_bad_curriculum_folder_raises_error more_reset_parameters basic_mock_brain test_take_action_returns_action_info_when_available basic_params test_take_action_returns_nones_on_missing_values test_take_action_returns_empty_with_no_agents test_rl_functions test_ppo_model_dc_vector_curio test_ppo_model_dc_vector_rnn test_ppo_model_cc_vector_rnn test_ppo_policy_evaluate test_ppo_model_cc_visual dummy_config test_ppo_model_dc_vector test_ppo_model_dc_visual test_ppo_model_cc_visual_curio test_ppo_model_dc_visual_curio test_ppo_model_cc_vector_curio test_ppo_model_cc_vector test_initialize_online_bc_trainer basic_trainer_controller assert_bc_trainer_constructed test_initialize_trainer_parameters_uses_defaults dummy_bad_config test_take_step_adds_experiences_to_trainer_and_trains test_initialize_trainer_parameters_override_defaults test_initialize_invalid_trainer_raises_exception test_start_learning_trains_until_max_steps_then_saves dummy_config dummy_offline_bc_config_with_override 
test_initialization_seed test_initialize_ppo_trainer test_start_learning_updates_meta_curriculum_lesson_number assert_ppo_trainer_constructed test_take_step_resets_env_on_global_done test_start_learning_trains_forever_if_no_train_model dummy_offline_bc_config trainer_controller_with_take_step_mocks trainer_controller_with_start_learning_mocks dummy_online_bc_config TestTrainerMetrics BaseUnityEnvironment safe_concat_np_ndarray BrainInfo BrainParameters safe_concat_lists Communicator UnityEnvironment UnityWorkerInUseException UnityException UnityTimeOutException UnityEnvironmentException UnityActionException MockCommunicator RpcCommunicator UnityToExternalServicerImplementation SocketCommunicator worker EnvironmentResponse EnvironmentCommand UnityEnvWorker SubprocessUnityEnvironment UnityToExternalServicer UnityToExternalStub add_UnityToExternalServicer_to_server test_initialization test_reset test_close test_step test_handles_bad_filename test_rpc_communicator_checks_port_on_create test_rpc_communicator_create_multiple_workers test_rpc_communicator_close mock_env_factory MockEnvWorker SubprocessEnvironmentTest create_mock_vector_braininfo sample UnityEnv setup_mock_unityenvironment step create_mock_brainparams create_mock_vector_braininfo UnityEnv setup_mock_unityenvironment step create_mock_brainparams setup_mock_unityenvironment create_mock_vector_braininfo create_mock_brainparams UnityEnv Mock Mock array range join isdir print replaceFilenameExtension add_argument exit verbose source_file ArgumentParser target_file sqrt topologicalSort list hasattr layers addEdge Graph print inputs set len list hasattr layers print filter match trim_model compile data layers print tensors float16 replace layers dumps data dtype layers isinstance print name tensors inputs outputs shape zip array_without_brackets to_json globals Build tanh mad tanh mul Build concat add sigmoid sub mad _ tanh mul Build concat add sigmoid mad Buffer reset_local_buffers number_visual_observations append_update_buffer append range enumerate make_demo_buffer load_demonstration number_steps read suffix BrainParametersProto from_agent_proto DemonstrationMetaProto ParseFromString AgentInfoProto append from_proto _DecodeVarint32 start_learning int str format create_environment_factory TrainerController external_brains put try_create_meta_curriculum load_config SubprocessUnityEnvironment MetaCurriculum reset_parameters keys chmod format basename isdir glob copyfile copytree prepare_for_docker_run replace int Process join docopt getLogger print run_training start Queue info append randint setLevel range endswith len HasField hasattr get_attr tensor_shape ndarray isinstance shape int_val bool_val float_val ListFields name ndarray isinstance str tensor_content ndarray product isinstance get_tensor_dtype print get_tensor_dims unpack int_val bool_val array float_val enter append add set name find_tensor_by_name split name lstm find_tensor_by_name find_forget_bias split get_layer_shape id Struct tensor hasattr name patch_data input_shapes out_shapes input get_attr append replace_strings_in_list tensors astype op zip enumerate print float32 patch_data_fn model_tensors map_ignored_layer_to_its_input co_argcount len items get_tensors hasattr name print process_layer eval ModelBuilderContext layers verbose Struct process_model open compress node GraphDef Model dims_to_barracuda_shape insert get_tensor_dims inputs MessageToJson ParseFromString cleanup_layers read memories print sort write trim summary size range reversed zeros_like asarray 
tolist discount_rewards join remove _get_candidate_names convert _get_default_tempdir dirname abspath isfile next BCPolicy evaluate close reset MockCommunicator reset_default_graph UnityEnvironment reset_default_graph reset_default_graph reset_default_graph reset_default_graph flatten list range len get_batch Buffer assert_array append_update_buffer make_mini_batch append reset_agent array range Curriculum Curriculum Curriculum make_demo_buffer load_demonstration dirname abspath MagicMock basic_options MagicMock MetaCurriculum assert_has_calls MetaCurriculumTest increment_lessons assert_called_with MetaCurriculumTest increment_lessons assert_called_with assert_not_called MetaCurriculumTest set_all_curriculums_to_lesson_num MetaCurriculumTest dict update MetaCurriculumTest MagicMock basic_mock_brain basic_params Policy BrainInfo get_action MagicMock basic_mock_brain basic_params Policy BrainInfo get_action MagicMock basic_mock_brain ActionInfo basic_params Policy BrainInfo get_action evaluate close reset MockCommunicator PPOPolicy reset_default_graph UnityEnvironment reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph reset_default_graph assert_array_almost_equal array discount_rewards dummy_offline_bc_config TrainerController assert_called_with BrainInfoMock basic_trainer_controller assert_bc_trainer_constructed dummy_offline_bc_config summaries_dir model_path keep_checkpoints BrainInfoMock basic_trainer_controller assert_bc_trainer_constructed summaries_dir model_path keep_checkpoints dummy_offline_bc_config_with_override BrainInfoMock basic_trainer_controller assert_bc_trainer_constructed summaries_dir model_path keep_checkpoints dummy_online_bc_config BrainInfoMock basic_trainer_controller assert_ppo_trainer_constructed summaries_dir dummy_config model_path keep_checkpoints initialize_trainers BrainInfoMock dummy_bad_config basic_trainer_controller MagicMock basic_trainer_controller start_learning assert_called_once MagicMock assert_not_called dummy_config trainer_controller_with_start_learning_mocks assert_called_once_with start_learning assert_called_once MagicMock dummy_config trainer_controller_with_start_learning_mocks assert_called_once_with start_learning MagicMock dummy_config trainer_controller_with_start_learning_mocks assert_called_once_with lesson MagicMock basic_trainer_controller take_step assert_called_once MagicMock trainer_controller_with_take_step_mocks assert_called_once MagicMock ActionInfo take_step outputs assert_not_called trainer_controller_with_take_step_mocks assert_called_once_with extend copy external_brains global_done payload reset _send_response reset_parameters env_factory step method_handlers_generic_handler add_generic_rpc_handlers UnityEnvironment close MockCommunicator UnityEnvironment close MockCommunicator reset str local_done print agents step close reset MockCommunicator UnityEnvironment len UnityEnvironment close MockCommunicator close RpcCommunicator close RpcCommunicator close RpcCommunicator | <img src="docs/images/unity-wide.png" align="middle" width="3000"/> <img src="docs/images/image-banner.png" align="middle" width="3000"/> # Unity ML-Agents Toolkit (Beta) [](docs/Readme.md) [](LICENSE) **The Unity Machine Learning Agents Toolkit** (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. 
Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. We also provide implementations (based on TensorFlow) | 990 |
Slowpuncher24/mlhiphy_v2 | ['gaussian processes'] | ['Machine Learning of Linear Differential Equations using Gaussian Processes'] | conjugate_gradient/linesearch.py conjugate_gradient/_minimize.py conjugate_gradient/optimize.py line_search_armijo _zoom _nonmonotone_line_search_cruz scalar_search_armijo line_search_BFGS line_search_wolfe1 scalar_search_wolfe1 scalar_search_wolfe2 _quadmin LineSearchWarning _nonmonotone_line_search_cheng line_search_wolfe2 _cubicmin show_options check_grad _check_unknown_options approx_fprime fminbound rosen_hess fmin fmin_powell Brent fmin_bfgs golden _minimize_scalar_golden approx_fhess_p MemoizeJac _approx_fprime_helper _endprint is_array_scalar brent rosen _minimize_newtoncg vecnorm _LineSearchError _minimize_scalar_bounded _minimize_powell _minimize_bfgs _minimize_scalar_brent wrap_function _line_search_wolfe12 fmin_ncg _linesearch_powell fmin_cg OptimizeResult main _minimize_neldermead bracket rosen_hess_prod _minimize_cg brute OptimizeWarning rosen_der minimize_scalar minimize dot fprime scalar_search_wolfe1 isinstance intc dcsrch min phi derphi zeros range isinstance scalar_search_wolfe2 warn dot fprime _zoom min phi warn extra_condition derphi range _quadmin phi _cubicmin derphi atleast_1d phi dot scalar_search_armijo line_search_armijo sqrt abs phi f max clip f clip __delitem__ __setitem__ join list map warn keys sum asarray asarray zeros_like atleast_1d zeros diag len atleast_1d zeros len _minimize_neldermead callback _check_unknown_options reduce warn flatten list append range inf copy wrap_function take func OptimizeResult float pop print min argsort zeros array len zeros f range len pop fprime pop line_search_wolfe1 _minimize_bfgs callback norm vecnorm print _check_unknown_options f OptimizeResult flatten dot wrap_function _line_search_wolfe12 isnan eye isinf myfprime append len _minimize_cg callback norm vecnorm print _check_unknown_options f OptimizeResult flatten dot wrap_function _line_search_wolfe12 polak_ribiere_powell_step myfprime append len _minimize_newtoncg callback _check_unknown_options reduce flatten abs squeeze f approx_fhess_p append range eps fhess_p wrap_function _line_search_wolfe12 fhess min dot zeros len _minimize_scalar_bounded print _check_unknown_options sign sqrt func OptimizeResult _endprint abs max _minimize_scalar_brent optimize Brent _check_unknown_options set_bracket get_result _minimize_scalar_golden bracket _check_unknown_options range func func brent _minimize_powell callback list asarray inf print _check_unknown_options squeeze OptimizeResult copy flatten wrap_function _linesearch_powell func eye append abs range len print vecfunc tuple finish list argmin len shape range slice fun success isinstance print vectorize dict zeros ravel callable x args join print strip __import__ extend dict lower getattr __doc__ append split time fmin_powell print fmin_bfgs fmin_ncg fmin fmin_cg append range len asarray derivative setdefault isinstance ub lb old_bound_to_new Bounds warn dict lower MemoizeJac bool new_bounds_to_old callable get int setdefault isinstance warn dict lower callable | # mlhiphy_v2 This repository builds upon and improves the work we did in https://github.com/ratnania/mlhiphy. It is part of my master thesis, in which theory about Gaussian processes, a discussion of this setup and an analysis of the results can be found. Furthermore I write about other promising methods to infer parameters of PDEs with a concluding comparison. 
The foundational idea of using Gaussian processes for this type of inference was laid out in Maziar Raissi's paper on 'Machine Learning of Linear Differential Equations using Gaussian Processes' (https://arxiv.org/pdf/1701.02440.pdf) from January 2017.
Improvements include:
* A much more efficient and stable implementation of the negative log-likelihood. This vastly improves the algorithm, as the optimization of the negative log-likelihood is at its center. This was done by utilizing the block matrix structure of the covariance matrix and by using the Cholesky decomposition. See for instance: [2D example](http://nbviewer.jupyter.org/github/Slowpuncher24/mlhiphy_v2/blob/master/2D_example.ipynb) or [3D example](http://nbviewer.jupyter.org/github/Slowpuncher24/mlhiphy_v2/blob/master/3D_example.ipynb).
* The inference of up to four hidden parameters in three dimensions, as opposed to only one hidden parameter in two dimensions in mlhiphy (respectively counting the temporal dimension as one). See: [advection-diffusion](http://nbviewer.jupyter.org/github/Slowpuncher24/mlhiphy_v2/blob/master/advection_diffusion.ipynb).
* An alternative implementation of the negative log-likelihood for the noise-free case, where we can optimize over one hyperparameter less (the signal variance can be written in terms of other values). See: [without noise](http://nbviewer.jupyter.org/github/Slowpuncher24/mlhiphy_v2/blob/master/without_noise.ipynb). | 991
Snoopy666/Extractive-User-Review-Summaruzation-using-LSTM | ['text summarization', 'document summarization', 'extractive summarization'] | ['SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents'] | train.py data_reader.py eval.py eval_v2.py model_v2.py predict.py data_reader_v2.py model.py | load_test_v2 load_data_excel_v2 load_embed load_from_fileList load_test_v3 get_embed Vocab load_data load_test_v1 load_data_excel_v3 load_data_excel_v1 DataReader DataReader_v2 load_test load_data_excel get_embed Vocab DataReader SummaRuNNer SummaRuNNer train | # Extractive-Summaruzation-using-LSTM Our implementation is based on the TensorFlow implementation of SummaRuNNer https://github.com/pocheyeniu/SummaRuNNer and https://arxiv.org/pdf/1611.04230.pdf. We provide two data readers, evaluations, and models for different data preprocessing & evaluation strategies and model architectures. We also provide a predict.py file for large-scale batch prediction on a distributed system in the inference process. | 992
SocieteGenevoiseDonnees/FiltersStylesDomains | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | utils.py heardingcats.py heard log_progress list concatenate glob print copy set choice shape log_progress int VBox format display IntProgress HTML enumerate len | # Read list [](https://gitter.im/societegenevoisedonnees/FiltersStylesDomains?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) * [visualizing parts of convolutional neural networks using keras and cats](https://hackernoon.com/visualizing-parts-of-convolutional-neural-networks-using-keras-and-cats-5cc01b214e59) * [Keras documentation](https://keras.io/) * [Monkey dataset from Kaggle](https://www.kaggle.com/slothkong/10-monkey-species) * A Neural Algorithm of Artistic Style: [arXiv:1508.06576](https://arxiv.org/abs/1508.06576) * [VGG Convolutional Neural Networks Practical](http://www.robots.ox.ac.uk/~vgg/practicals/cnn/index.html) * Vince's post on style transfer using Torch (not pyTorch): [TorchDeepArt](https://vincecr0ft.github.io/TorchDeepArt.html) | 993 |
SoftwareGift/FeatheNets_Face-Anti-spoofing-Attack-Detection-Challenge-CVPR2019 | ['face anti spoofing'] | ['FeatherNets: Convolutional Neural Networks as Light as Feather for Face Anti-spoofing'] | model_onnx2IR.py gen_final_submission.py models/mobilenetv2.py tools/benchmark/reporter.py tools/__init__.py models/net_factory.py models/__init__.py read_data.py models/fish_block.py Feather_pytorch_2_onnx.py tools/benchmark/stat_tree.py utils/__init__.py models/fishnet.py tools/benchmark/compute_speed.py tools/benchmark/__init__.py tools/benchmark/compute_memory.py utils/data_aug.py main.py tools/benchmark/compute_flops.py models/MobileLiteNet.py losses.py utils/profile.py tools/gluon2pytorch.py data/fileList.py roc.py models/FeatherNet.py tools/benchmark/statistics.py tools/benchmark/compute_madd.py tools/benchmark/model_hook.py splitscore Average fecth_ensembled_score get_best_threshold num_err FocalLoss TalyorCrossEntroyLoss accuracy validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train CASIA cal_metric conv_1x1_bn InvertedResidual conv_bn FeatherNet FeatherNetB FeatherNetA SELayer FishNet Fish fish Bottleneck MobileLiteNet102 MobileLiteNet54_se InvertedResidual MobileLiteNet105_se MobileLiteNet153 MobileLiteNet MobileLiteNet54 MobileLiteNet156_se SELayer conv_1x1_bn load_weight InvertedResidual conv_bn moilenetv2 MobileNetV2 fishnet99 fishnet150 compute_Pool2d_flops compute_Upsample_flops compute_flops compute_BatchNorm2d_flops compute_ReLU_flops compute_Linear_flops compute_Conv2d_flops compute_Conv2d_madd compute_BatchNorm2d_madd compute_ConvTranspose2d_madd compute_AvgPool2d_madd compute_Softmax_madd compute_ReLU_madd compute_Linear_madd compute_madd compute_MaxPool2d_madd compute_Bilinear_madd compute_BatchNorm2d_memory compute_Conv2d_memory compute_Linear_memory compute_memory compute_Pool2d_memory num_params compute_ReLU_memory compute_PReLU_memory compute_speed ModelHook round_value report_format convert_leaf_modules_to_stat_tree get_parent_node ModelStat stat StatNode StatTree ColorAugmentation calc_flops count_params append float split open Average min append max range len print format range len fecth_ensembled_score range num_err topk isinstance size t eq mul_ expand_as append sum max validate CASIA tuple speed SGD input_size DataParallel DataLoader adjust_learning_rate save_checkpoint device max seed getcwd epochs count_params load_state_dict parse_args to sum FocalLoss range manual_seed_all format save_path Compose stat start_epoch lr resume Normalize manual_seed isfile random_seed float setattr mkdir image_size load items evaluate print parameters summary train compute_speed update data time criterion model backward AverageMeter clone accuracy zero_grad to step enumerate time cal_metric AverageMeter eval ravel save every_decay param_groups lr show items function arange plot reshape squeeze roc_curve interp1d argwhere float abs roc_auc_score enumerate FeatherNet FeatherNet MobileLiteNet MobileLiteNet MobileLiteNet MobileLiteNet MobileLiteNet MobileLiteNet load items update state_dict load_weight load_state_dict MobileNetV2 load DataParallel fish load_state_dict load DataParallel fish load_state_dict format isinstance print Conv2d Upsample BatchNorm2d __name__ Linear kernel_size groups shape affine prod kernel_size groups kernel_size groups kernel_size isinstance kernel_size isinstance format Softmax isinstance AvgPool2d print MaxPool2d Conv2d Bilinear BatchNorm2d ConvTranspose2d __name__ Linear PReLU format isinstance print Conv2d BatchNorm2d __name__ 
Linear numel num_params numel num_params numel size numel num_params numel numel time model randn float set_device synchronize eval info to range sum list format fillna str parameter_quantity ConvFlops name duration Series inference_memory apply MAdd append DataFrame Flops join find_child_index split range len join get_parent_node items add_child tolist len StatNode range split ModelStat show_report FloatTensor Variable model print rand foo sum cuda format print calc_flops parameters filter sum | ## FeatherNets for [Face Anti-spoofing Attack Detection Challenge@CVPR2019](https://competitions.codalab.org/competitions/20853#results)[1] # Params only 0.35M!! FLOPs 80M !! # Results on the validation set |model name | ACER|TPR@FPR=10E-2|TPR@FPR=10E-3|FP|FN|epoch|params|FLOPs| | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | |FishNet150| 0.00144|0.999668|0.998330|19|0|27|24.96M|6452.72M| |FishNet150| 0.00181|1.0|0.9996|24|0|52|24.96M|6452.72M| |FishNet150| 0.00496|0.998664|0.990648|48|8|16|24.96M|6452.72M| |MobileNet v2|0.00228|0.9996|0.9993|28|1|5|2.23M|306.17M |MobileNet v2|0.00387|0.999433|0.997662|49|1|6|2.23M|306.17M | 994 |
Soldelli/gait_anomaly_detection | ['anomaly detection'] | ['Seq2Seq RNN based Gait Anomaly Detection from Smartphone Acquired Multimodal Motion Data'] | Seq2Seq-gait-analysis/conv_classifier_eval.py pre-processing/utils.py Seq2Seq-gait-analysis/sequence_classification_app.py Seq2Seq-gait-analysis/data_utils.py Seq2Seq-gait-analysis/bidirectional_autoencoder.py pre-processing/camera_tracking.py pre-processing/preprocessing.py Seq2Seq-gait-analysis/svm_classifier.py pre-processing/config.py featureTracking decomposeR pointNorm postProc calibration checkVideoFrames camera_tracking preprocessing_y write_data psd_visualization data_visualization read_data stance_swing_time plot_preprocessed_examples video_flattening interpolation butterworth smooth cycles_extraction data_inspection plot_raw_data setup_folders psd_comparison step_vel plot_walk_statistics build_output_layer training_step build_bw_cell build_training_iterator build_fw_cell training_loss build_rnn_autoencoder build_test_iterator add_relu_activation build_output_layer reshape_state build_loss build_iterators build_conv_layers flatten_layer training_step add_dropout write_np_dataset_to_disk build_np_dataset concat_classification_datasets_with_labels load_original_data build_iterators compute_accuracy create_restore_object restore_model load_fit_model shape_data build_classifier fit_svm save_fit_model get_classes_from_labels load_data evaluate_model calcOpticalFlowPyrLK range delete len getOptimalNewCameraMatrix warpAffine undistort getRotationMatrix2D pow sqrt atan2 dot read COLOR_BGR2GRAY append range cvtColor zeros min array max VideoCapture featureTracking CAP_PROP_FRAME_COUNT subplot str findEssentialMat decomposeR findFundamentalMat waitKey ylabel imshow recoverPose title append range get format plot COLOR_BGR2GRAY glob close checkVideoFrames set SIFT_create copy detect int time read postProc print pt calibration xlabel pause dict dot savemat figure zeros array read_csv cvtColor len gyro_timestamp gyro_x flatten acc_z gyro_z acc_x append magn_z format magn_timestamp glob plot_raw_data acc_timestamp frame_timestamp magn_x gyro_y magn_y acc_y array read_csv str len range makedirs show plot IClr xlabel interpolation tolist astype cycles_extraction mean ICFC figure double array range len format plot xlabel xscale margins ylabel freqs grid axvline close log10 savefig xlim abs int min mean linspace interp round max butter interpolation butterworth min filtfilt vstack linspace interp append max range len irfft max arange fabs reshape interpolation tolist min mean sqrt pow space zeros float double array range len format xlabel interpolation grid semilogx ylabel axis close welch log10 savefig legend makedirs subplot format plot xlabel butter len semilogx ylabel axis close welch filtfilt title log10 linspace savefig legend makedirs str time format data_visualization int print len linspace zeros range makedirs int subplot plot len axvline ylabel close ylim savefig linspace interp zeros xlim range makedirs subplot format plot xlabel close ylabel title savefig linspace len format plot makedirs close title savefig linspace len data_visualization str format int print len linspace range makedirs subplot plot xlabel axis ylabel close title savefig makedirs load from_tensor_slices make_initializable_iterator print cast batch len load from_tensor_slices make_initializable_iterator cast batch len MultiRNNCell MultiRNNCell bidirectional_dynamic_rnn mean_squared_error trainable_variables gradients clip_by_global_norm GradientDescentOptimizer 
apply_gradients zip exponential_decay load from_tensor_slices make_initializable_iterator classifier_l_fname batch_size classifier_ds_fname cast batch len int squeeze log2 stack floor unstack conv2d max_pool get_variable flatten dense softmax_cross_entropy_with_logits reduce_mean minimize reshape zeros permutation concatenate print save restore sum argmax len load classifier_ds_fname classifier_l_fname reshape dump makedirs sum predict len | # Seq2Seq RNN based Gait Anomaly Detection from Smartphone Acquired Multimodal Motion Data Deep learning approach to anomaly detection in gait data acquired through smartphone's multimodal sensors. The proposed architecture takes advantage of RNN and CNN layers to implement a Sequence-to-Sequence feature extractor and a Convolutional classifier, check the [paper](https://arxiv.org/abs/1911.08608) for more details.</br> <p align="center"> <img src="https://github.com/Soldelli/gait_anomaly_detection/blob/master/ALV/images/teaser_gait_analysis.png"> </p> ## Welcome If you find any piece of code valuable for your research please cite this work:</br> [](https://doi.org/10.5281/zenodo.2648530)</br> And don't forget to give us a :star: in the GitHub banner :wink:. | 995 |
SotirisKot/Content-Aware-N2V | ['link prediction', 'network embedding'] | ['Embedding Biomedical Ontologies by Jointly Encoding Network Structure and Textual Node Descriptors'] | train_node2vec.py experiments.py dataloader.py node2vec.py models.py config.py utils.py create_dataset.py create_train_test_splits_hard return_parents create_train_test_splits_easy read_graph parse_args Node2VecDataset learn_embeddings clean_dictionary get_index phr2idx read_graph main parse_args tokenize unsort GRUEncoder SelfAttention AverageNode2Vec Graph alias_draw alias_setup clean_dictionary phr2idx get_average_embedding load_embeddings Node2Vec create_confusion_matrix save_checkpoint print_params get_edge_embeddings init_logger create_attention_weights_for_batch create_pooling_weights_for_batch get_index cos_sim tokenize get_cos_embedding plot_attention clean_dictionary phr2idx Utils get_index tokenize set_defaults add_argument ArgumentParser connected_component_subgraphs to_undirected print weighted edges nodes_with_selfloops read_edgelist max remove_edge number_of_nodes write_edgelist seed list len _plain_bfs add number_connected_components add_edge format shuffle set number_of_edges int remove join print tqdm randint remove_edge makedirs seed number_connected_components list format number_of_nodes successors print return_parents makedirs shuffle _plain_bfs set add tqdm number_of_edges to_directed append len print Node2Vec wv eval train items tokenize checkpoint_dir number_of_nodes test_neg num_walks open p output_file directed read_graph train_neg q number_connected_components checkpoint_name format Graph number_of_edges simulate_walks train_pos resume_training load embeddings_dir test_pos learn_embeddings print evaluate walk_length preprocess_transition_probs train len sort index_select pop len append zeros enumerate int rand floor len save print parameters size join setFormatter getLogger addHandler makedirs removeHandler Formatter setLevel INFO FileHandler multiply get_average_embedding append array enumerate len divide add float enumerate append array cos_sim get_average_embedding join list sorted format str tolist Counter append float sum keys values enumerate append join enumerate str join str OrderedDict append enumerate open abspath zip split dot norm | # Content-Aware Node2vec Source code and datasets of BioNLP 2019 paper: "Embedding Biomedical Ontologies by Jointly Encoding Network Structure and Textual Node Descriptors" ## Datasets The folder "datasets" contains the edgelists of the two datasets, denoted Part-of and Is-a, used in Content-Aware Node2vec. For each dataset, exist some dictionaries in the folder data_utilities. For example for the Is-a dataset: * isa_phrase_dic.p (mapping between nodes and textual descriptors--the keys are the textual descriptors -- you must use the reversed_dic) * isa_phrase_vocab.p (the textual descriptors associated with each node) * isa_reversed_dic.p (the reversed dictionary of isa_phrase_dic.p) ## Run | 996 |
Soumyabrata/nighttime-imaging | ['semantic segmentation'] | ['Nighttime sky/cloud image segmentation'] | scripts/internal_calibration.py scripts/undistort_WAHRSIS_imgs.py world2cam cam2world cuda_interpolate cuda_interpolate3D undistortCC tan arctan2 inv pi dot sqrt tile sin power arcsin array spacing arccos arctan2 arctan cos polyval pi sqrt sin power array radians T arange cuda_interpolate reshape astype ev cuda_interpolate3D dot shape RectBivariateSpline tile meshgrid world2cam ravel array clip set_filter_mode divmod interpolation astype matrix_to_texref LINEAR get_texref Out In SourceModule get_function set_filter_mode divmod interpolation astype matrix_to_texref LINEAR get_texref Out In SourceModule get_function | # Nighttime sky/cloud image segmentation With the spirit of reproducible research, this repository contains all the codes required to produce the results in the manuscript: S. Dev, F. M. Savoy, Y. H. Lee and S. Winkler, Nighttime sky/cloud image segmentation, *Proc. IEEE International Conference on Image Processing (ICIP)*, 2017. Please cite the above paper if you intend to use whole/part of the code. This code is only for academic and research purposes. ## Code Organization The codes are written in python and MATLAB. ### Dataset The nighttime image segmentation dataset can be downloaded from [this](http://vintage.winklerbros.net/swinseg.html) link. A few sample images can be found in the folder `./images`. ### Core functionality * `color16Norm.m` Generates the 16 color channels in the form of a MATLAB struct. All values are normalized. * `color16_struct.m` Generates the 16 color channels in the form of a MATLAB struct. | 997 |
Spenhouet/automated-deep-photo-style-transfer | ['style transfer', 'semantic segmentation'] | ['Automated Deep Photo Style Transfer'] | components/VGG19/model.py components/path.py components/matting.py components/PSPNet/network.py components/PSPNet/model.py components/segmentation.py components/semantic_merge.py components/NIMA/model.py style_transfer.py change_filename compute_nima_loss calculate_gram_matrix calculate_layer_content_loss calculate_layer_style_loss adam_variables_initializer calculate_photorealism_regularization write_metadata load_image save_image style_transfer compute_matting_laplacian load load_img read_label_colors segmentation_pred compute_segmentation preprocess extract_segmentation_masks merge_difference merge_segments mask_for_tf reduce_dict get_unique_colors_from_image annotate_label_similarity color_tuples_to_label_list_tuples replace_colors_in_dict get_labels_to_compare postprocess get_nima_model preprocess load_color_label_dict PSPNet50 layer Network postprocess VGG19ConvSub preprocess load_weights print VGG19ConvSub float32 placeholder preprocess load_weights _get_beta_accumulators list extend get_nima_model output mean_score identity value TensorShape calculate_gram_matrix float32 square reduce_sum reduce_mean zip append resize_masks squared_difference compute_matting_laplacian sparse_tensor_dense_matmul reshape transpose matmul reduce_sum unstack append expand_dims reshape expand_dims convert array fromarray uint8 clip save splitext join data tocoo broadcast_to SparseTensor to_float list transpose identity matmul shape range grey_erosion coo_matrix zip T print reshape inv extend repeat zeros ravel array print list constant one_hot reshape resize_bilinear maximum crop_to_bounding_box matmul shape preprocess argmax keys print format exit pad_to_bounding_box concat cast expand_dims split shape loadmat print restore format items list print list replace_colors_in_dict difference extract_segmentation_masks combinations list connected_components merge_difference print load_color_label_dict from_edgelist intersection keys color_tuples_to_label_list_tuples replace_colors_in_dict annotate_label_similarity items list where shape zeros shape reshape unique get_unique_colors_from_image join output Model InceptionResNetV2 load_weights input load join close open load join | # Automated Deep Photo Style Transfer This repository holds a TensorFlow implementation for the paper [Automated Deep Photo Style Transfer](https://arxiv.org/abs/1901.03915). At its core this is a TensorFlow based implementation of the paper [Deep Photo Style Transfer](https://arxiv.org/abs/1703.07511). One of the main contributions of “Automated Deep Photo Style Transfer” is the automatic segmentation of input images and a semantic grouping thereof. Another contribution of this is the optimization of the transfer image by improving the aesthetics of the image with the use of [Neural Image Assessment (NIMA)](https://arxiv.org/abs/1709.05424). ## Examples Given a content and style image, automatically a segmentation is created and semantically grouped to produce a transfer image in the size of the content image by using the [Deep Photo Style Transfer](https://arxiv.org/abs/1703.07511): <p align="center"> <img src="./examples/teaser.jpg" width="870" alt="Overview"/> </p> Here are some example results (from left to right are the content image, the resulting transfer image and the style image): | 998 |
Spico197/NYT-H | ['relation extraction'] | ['Towards Accurate and Consistent Evaluation: A Dataset for Distantly-Supervised Relation Extraction'] | helper.py plot_prc.py run.py config.py evaluate.py model.py Config Threshold evaluate_nyth evaluate_crcnn compute_metrics_nyth compute_magt compute_dsgt evaluate_bag2sent WordEmbeddingLoader NythBagDataset set_seed NythDataset Bag2SentDataProcessor BagDataProcessor NythBag2SentDataset SentDataProcessor load_line_json CRCNN PCNN_ATT CNN_ONE SentCNN ATT_BLSTM CNN_ATT RankingLoss PCNN PCNN_ONE get_logdir_suffix train get_output_dirs_ready items list recall_score classification_report set OrderedDict mean precision_score append f1_score sum len join list dict output_dir info makedirs join list dict output_dir info makedirs join list dict output_dir info makedirs items list format print classification_report set append sum range len int list format print recall_score add dict set precision_score append f1_score sum len seed str manual_seed gethostname strftime join dirname abspath output_dir makedirs model tuple evaluate_nyth evaluate_crcnn zero_grad class_num output_dir device save tensor max seed list tb_logging_step set_seed Adam get_logdir_suffix to add_pr_curve CrossEntropyLoss train_loader state_dict SummaryWriter epoch close softmax RankingLoss info save_best_model trange enumerate join do_eval_while_train items criterion backward makedirs tqdm parameters step add_scalar | # NYT-H Datasets and codes for our COLING2020 paper: Towards Accurate and Consistent Evaluation: A Dataset for Distantly-Supervised Relation Extraction [[pdf]](https://www.aclweb.org/anthology/2020.coling-main.566.pdf) ## Dependencies - python == 3.7.7 - torch == 1.5.1 - numpy == 1.18.5 - sklearn == 0.21.3 - tqdm == 4.36.1 - matplotlib == 3.2.1 | 999 |