repo (stringlengths 8-116) | tasks (stringlengths 8-117) | titles (stringlengths 17-302) | dependencies (stringlengths 5-372k) | readme (stringlengths 5-4.26k) | __index_level_0__ (int64 0-4.36k) |
---|---|---|---|---|---|
Srijha09/Making-Images-Artsy-Neural-Style-Transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | utils.py style_transfer.py app.py run.py home upload serve_image style_transfer run_model is_file_allowed image_loader StyleLoss Normalization gram_matrix imshow get_style_model_and_losses ContentLoss add_time run_style_transfer get_input_optimizer is_file_allowed print save filename add_time print run_model image_loader ToPILImage Compose clone imshow eval figure device is_available to run_style_transfer replace unsqueeze convert open str time squeeze clone close unloader title savefig t size mm view deepcopy children isinstance Sequential StyleLoss MaxPool2d add_module Conv2d len ReLU ContentLoss BatchNorm2d to range append detach LBFGS print clamp_ get_style_model_and_losses step get_input_optimizer | # Making-Images-Artsy-Neural-Style-Transfer The whole idea of editing images and making them more creative by merging with them other images lies in the principle of Neural Style Transfer. Neural style transfer is an optimization technique used to take three images, a content image, a style reference image (such as an artwork by a famous painter), and the input image you want to style — and blend them together such that the input image is transformed to look like the content image, but “painted” in the style of the style image. The principle is simple: we define two distances, one for the content (DC) and one for the style (DS). DC measures how different the content is between two images while DS measures how different the style is between two images. Then, we take a third image, the input, and transform it to minimize both its content-distance with the content-image and its style-distance with the style-image. Now we can import the necessary packages and begin the neural transfer. I have taken the help of Pytorch Neural Style Transfer: https://pytorch.org/tutorials/advanced/neural_style_tutorial.html and idea from the paper https://arxiv.org/abs/1508.06576 I have also incorporated a Flask Application deployed on Heroku Cloud.    |  | 1,000 |
Stanford-ILIAD/batch-active-preference-based-learning | ['active learning'] | ['Batch Active Preference-Based Learning of Reward Functions'] | input_sampler.py simulator.py dynamics.py demos.py lane.py utils_driving.py run.py feature.py world.py kmedoids.py utils.py trajectory.py algos.py visualize.py car.py run_optimizer.py simulation_utils.py sampling.py models.py boundary_medoids successive_elimination select_top_candidates generate_psi random func_psi func greedy medoids nonbatch Car batch nonbatch CarDynamics Dynamics feature speed control bounded_control Feature kMedoids Lane StraightLane LunarLander MountainCar Tosser Driver Swimmer Sampler perform_best get_feedback run_algo func create_env GymSimulation MujocoSimulation DrivingSimulation Simulation Trajectory extract vector randomize grad jacobian Maximizer shape hessian matrix NestedMaximizer scalar extract vector grad jacobian shape hessian matrix Visualizer irl_ground world2 world3 Object zero world_features world6 world1 world_test playground world0 world5 world4 World dot exp T sum list reshape range feed get_features zeros num_of_features array feed_size generate_psi fmin_l_bfgs_b feed_size load name argsort func_psi zeros num_of_features feed_size select_top_candidates select_top_candidates kMedoids pairwise_distances simplices select_top_candidates kMedoids pairwise_distances unique ConvexHull select_top_candidates reshape pairwise_distances where delete array uniform str norm format print get_feedback reshape exit run_algo num_of_features sample mean uniform append create_env Sampler range norm format print get_feedback reshape run_algo num_of_features sample mean uniform append create_env Sampler range list sort argmin shuffle where set add shape copy mean array_equal zip array range len watch feed get_features lower array print exit print exit set_ctrl array get_features watch set_ctrl inf ctrl_size lower fmin_l_bfgs_b range normal set_value isinstance concatenate shape isinstance concatenate CarDynamics StraightLane UserControlledCar Car append World CarDynamics StraightLane simple_reward World UserControlledCar f goal cars zip append SimpleOptimizerCar enumerate open CarDynamics StraightLane simple_reward UserControlledCar append SimpleOptimizerCar World CarDynamics StraightLane simple_reward bounds UserControlledCar NestedOptimizerCar append bounded_control World CarDynamics StraightLane simple_reward bounds UserControlledCar traj_h NestedOptimizerCar append bounded_control World CarDynamics StraightLane simple_reward bounds UserControlledCar traj_h NestedOptimizerCar append bounded_control World CarDynamics StraightLane simple_reward bounds UserControlledCar traj_h NestedOptimizerCar append bounded_control World CarDynamics StraightLane simple_reward bounds UserControlledCar traj_h NestedOptimizerCar append bounded_control World CarDynamics StraightLane simple_reward bounds UserControlledCar NestedOptimizerCar append bounded_control World CarDynamics asarray StraightLane simple_reward bounds NestedOptimizerCar append bounded_control SimpleOptimizerCar World CarDynamics StraightLane UserControlledCar Car append World | This code learns reward functions from human preferences in various tasks by actively generating batches of scenarios and querying a human expert. Companion code to [CoRL 2018 paper](https://arxiv.org/abs/1810.04303): E Bıyık, D Sadigh. **"[Batch Active Preference-Based Learning of Reward Functions](https://arxiv.org/abs/1810.04303)"**. *Conference on Robot Learning (CoRL)*, Zurich, Switzerland, Oct. 
2018. ## Dependencies You need to have the following libraries with [Python3](http://www.python.org/downloads): - [MuJoCo 1.50](http://www.mujoco.org/index.html) - [NumPy](https://www.numpy.org/) - [OpenAI Gym](https://gym.openai.com) - [pyglet](https://bitbucket.org/pyglet/pyglet/wiki/Home) - PYMC | 1,001 |
StanfordAI4HI/ICLR2019_evaluating_discrete_temporal_structure | ['time series'] | ['Learning Procedural Abstractions and Evaluating Discrete Latent Temporal Structure'] | data_loaders.py metrics.py plot.py config.py viz.py evaluate.py utils.py load_bees_dataset load_mocap6_dataset evaluate_all_methods analyze_single_method analyze_best_runs_across_methods_for_metric_pair viz_best_runs_across_methods read_single_run generate_eval_dict restrict_eval_dict restrict_frame_to_metrics analyze_best_runs_across_method_pairs_by_metrics get_methods_sorted_by_best_runs_on_metric evaluate_a_prediction analyze_best_runs_across_methods_for_metric select_best_run_per_method_by_metric evaluate_single_method analyze_all_methods evaluate_single_run repeated_structure_score munkres_score segment_homogeneity_score dcg_score label_agnostic_segmentation_score segment_completeness_score temporal_structure_score_new segment_structure_score_new compute_HC_given_SG ndcg_score compute_HSC_given_SG transition_structure_score bar_plot_combined_2 setup_sns factor_plot_single_method factorplot_methods_grouped_by_metrics scatterplot_methods_varying_beta bar_plot_single_method factor_plot_combined bar_plot_combined barplot_methods_grouped_by_metrics heaviest_common_substring entropy heaviest_common_subsequence_with_alignment relabel_clustering_with_munkres_correspondences relabel_clustering get_segment_dict heaviest_common_subsequence viz_temporal_clusterings_with_segment_spacing viz_temporal_clusterings_by_segments viz_temporal_clusterings clear_labels _add_subplot append flatten loadmat transform concatenate glob len natsorted LabelEncoder scale append range fit repeated_structure_score transition_structure_score munkres_score normalized_mutual_info_score label_agnostic_segmentation_score adjusted_rand_score temporal_structure_score_new segment_structure_score_new homogeneity_completeness_v_measure generate_eval_dict load exists read_single_run evaluate_a_prediction write glob append evaluate_single_run evaluate_single_method DataFrame analyze_single_method print natsorted append DataFrame viz_temporal_clusterings_with_segment_spacing concatenate set viz_temporal_clusterings_by_segments relabel_clustering_with_munkres_correspondences viz_temporal_clusterings vstack append makedirs barplot_methods_grouped_by_metrics makedirs restrict_frame_to_metrics get_methods_sorted_by_best_runs_on_metric viz_best_runs_across_methods restrict_frame_to_metrics get_methods_sorted_by_best_runs_on_metric join makedirs factorplot_methods_grouped_by_metrics restrict_frame_to_metrics enumerate evaluate_all_methods analyze_best_runs_across_methods_for_metric_pair viz_best_runs_across_methods to_latex print makedirs map restrict_frame_to_metrics scatterplot_methods_varying_beta analyze_best_runs_across_method_pairs_by_metrics pivot_table analyze_best_runs_across_methods_for_metric select_best_run_per_method_by_metric round drop compute sum Munkres concatenate contingency_matrix float make_cost_matrix list heaviest_common_substring values heaviest_common_subsequence_with_alignment unique zip append sum heaviest_common_subsequence enumerate len entropy relabel_clustering get_segment_dict float len float get_segment_dict entropy len compute_HSC_given_SG relabel_clustering entropy compute_HSC_given_SG relabel_clustering entropy compute_HC_given_SG entropy compute_HC_given_SG compute_HSC_given_SG relabel_clustering entropy repeated_structure_score segment_structure_score_new log2 take arange len dcg_score set_style set_context set list plot yticks 
xlabel min close ylabel flatten ylim savefig Categorical figure append ndcg_score xticks max range len setup_sns list set_ylabels set_title suptitle set_xticklabels factorplot set_xlabel close flat savefig legend zip range set_ylim list remove arange set_xticklabels xlabel close ylabel ylim set_xticks title figure savefig barplot range list remove arange set_xticklabels xlabel close ylabel ylim set_xticks savefig figure barplot setup_sns list set_ylabels set_title set_xticklabels print factorplot set_yticks set_yticklabels set_xlabel close subplots_adjust flat figure savefig zip range set_ylim list remove arange set_xticklabels print xlabel close ylabel ylim set_xticks savefig figure barplot close tight_layout title ylim savefig figure barplot setup_sns list set_ylabels set_title set_xticklabels factorplot set_yticklabels set_yticks set_xlabel close subplots_adjust flat figure savefig zip range set_ylim float64 sum astype OrderedDict unique zip append enumerate append compute max Munkres contingency_matrix LabelEncoder make_cost_matrix fit_transform zeros abs array range len zeros array range len zeros array range len set_yticks set_xticks set_visible set_xticklabels imshow clear_labels add_subplot set_title len close tight_layout GridSpec savefig figure _add_subplot enumerate list sorted close extend GridSpec tight_layout get_segment_dict savefig figure _add_subplot array enumerate len list sorted close extend GridSpec tight_layout get_segment_dict savefig figure _add_subplot array enumerate len | # Evaluating Discrete Latent Temporal Structure This repository provides code to run experiments from, **Learning Procedural Abstractions and Evaluating Discrete Latent Temporal Structure** Karan Goel and Emma Brunskill _ICLR 2019_ Specifically, this repository reproduces experiments with the new evaluation criteria proposed in the paper. The results are reproduced from logged runs of the methods being compared, since running them from scratch can be very slow. #### Usage | 1,002 |
StanfordVL/arxivbot | ['few shot learning', 'program induction'] | ['Neural Task Programming: Learning to Generalize Across Hierarchical Tasks'] | forever.py bot.py format_arxiv summarize parse_direct_mention parse_arxiv parse_bot_commands handle_command summarizer get_stop_words document PlaintextParser list results print findall append summarize join title split parse_direct_mention search parse_arxiv api_call | # ArxivBot This is a Slack bot that posts arXiv summaries in a channel built based on this tutorial: https://www.fullstackpython.com/blog/build-first-slack-bot-python.html ## Quick start - Follow this [instruction](https://www.fullstackpython.com/blog/build-first-slack-bot-python.html) to create a slack bot (you are free to choose a name) and obtain the slack bot token. - Export the environment variable to the computer you want to deploy the bot ``` export SLACK_BOT_TOKEN=<your-slack-bot-token> ``` | 1,003 |
Star-Clouds/CenterFace | ['face detection'] | ['CenterFace: Joint Face Detection and Alignment Using Face as Point'] | prj-python/centerface.py prj-tensorrt/centerface.py prj-tensorrt/demo.py prj-python/demo.py CenterFace test_image_tensorrt camera test_widerface test_image CenterFace test_image_tensorrt VideoCapture read imshow rectangle CenterFace centerface release range circle waitKey imshow rectangle CenterFace centerface imread range circle waitKey imshow rectangle CenterFace centerface imread range circle join format print len write close CenterFace centerface open imread loadmat enumerate makedirs print len | ## CenterFace ### Introduce CenterFace(size of 7.3MB) is a practical anchor-free face detection and alignment method for edge devices.  ### Recent Update - `2019.09.13` CenterFace is released. ### Environment - OpenCV 4.1.0 - Numpy - Python3.6+ | 1,004 |
Statwolf/LocalLinearForest | ['causal inference'] | ['Local Linear Forests'] | modules/local_linear_forest.py modules/test_local_linear_forest.py LocalLinearForestRegressor TestLiuk TestLocalLinearForest | # LocalLinearForest Python implementation of Local Linear Forest (https://arxiv.org/pdf/1807.11408.pdf) | 1,005 |
StephenBo-China/DIEN | ['click through rate prediction'] | ['Deep Interest Evolution Network for Click-Through Rate Prediction'] | alibaba_data_reader.py activations.py layers.py main.py model.py loss.py Dice get_embedding_count_dict get_embedding_features_list get_sequence_data get_normal_data get_data convert_tensor get_embedding_dim_dict get_embedding_count get_user_behavior_features get_batch_data AUGRU GRU_GATES AuxLayer main DIEN dict get_embedding_count dict fillna read_csv list len map append zeros array values split sample get_sequence_data get_normal_data DIEN get_embedding_count_dict print get_embedding_features_list get_data get_embedding_dim_dict get_user_behavior_features get_batch_data | # DIEN_Final 本项目使用tensorflow2.0复现DIEN。 论文链接: https://arxiv.org/pdf/1809.03672.pdf 数据集使用阿里数据集测试模型代码, 数据集链接: https://tianchi.aliyun.com/dataset/dataDetail?dataId=56 # DIEN调用方法: ## 0. 简介: DIEN的输入特征中主要包含三个部分特征: 用户历史行为序列, 目标商品特征, 用户画像特征。 用户历史行为序列需包含点击序列与非点击序列。 请按如下1~2方法处理输入特征。 ## 1. 初始化: | 1,006 |
StephenBo-China/DIEN-DIN | ['click through rate prediction'] | ['Deep Interest Network for Click-Through Rate Prediction', 'Deep Interest Evolution Network for Click-Through Rate Prediction'] | alibaba_data_reader.py activations.py layers.py utils.py main.py model.py loss.py dice Dice get_embedding_count_dict get_test_data get_embedding_features_list get_sequence_data get_normal_data get_data convert_tensor get_embedding_dim_dict get_batch_data get_embedding_count get_user_behavior_features get_length AUGRU GRU_GATES attention AuxLayer main train_one_step get_train_data DIEN DIN make_test_loss_dir make_train_loss_dir get_file_name get_input_dim add_loss concat_features mkdir dict get_embedding_count dict fillna read_csv list len map append zeros array values split list len map append values split sample get_sequence_data get_length get_normal_data get_sequence_data head get_length get_normal_data trainable_variables clip_by_global_norm gradient loss_metric apply_gradients zip int DIEN train_one_step get_embedding_count_dict get_train_data get_embedding_features_list len Adam get_data get_embedding_dim_dict Sum get_user_behavior_features create_file_writer range AUC get_batch_data localtime strftime join close write open join close write open join list write close append open append makedirs | # DIEN-DIN 本项目使用tensorflow2.0复现阿里兴趣排序模型DIEN与DIN。 DIN论文链接: https://arxiv.org/pdf/1706.06978.pdf DIEN论文链接: https://arxiv.org/pdf/1809.03672.pdf 数据集使用阿里数据集测试模型代码, 数据集链接: https://tianchi.aliyun.com/dataset/dataDetail?dataId=56 # 调用方法: ## 0. 简介: DIEN的输入特征中主要包含三个部分特征: 用户历史行为序列, 目标商品特征, 用户画像特征。 用户历史行为序列需包含点击序列与非点击序列。 请按如下1~2方法处理输入特征。 | 1,007 |
StephenBo-China/recommendation_system_sort_model | ['click through rate prediction'] | ['Deep Interest Network for Click-Through Rate Prediction', 'Deep Interest Evolution Network for Click-Through Rate Prediction'] | layers/Dice.py layers/AUGRU.py model/DIEN.py layers/attention.py layers/AuxLayer.py model/DIN.py utils.py data_reader/alibaba_data_reader.py make_test_loss_dir make_train_loss_dir get_file_name get_input_dim add_loss concat_features mkdir get_embedding_count_dict get_test_data get_embedding_features_list get_sequence_data get_normal_data get_data convert_tensor get_embedding_dim_dict get_batch_data get_embedding_count get_user_behavior_features get_length attention AUGRU GRU_GATES attention AuxLayer dice Dice DIEN DIN localtime strftime join close write open join close write open join list write close append open append makedirs dict get_embedding_count dict fillna read_csv list len map append zeros array values split list len map append values split sample get_sequence_data get_length get_normal_data get_sequence_data head get_length get_normal_data | # 本项目将使用tensorflow2.0复现现在推荐系统中的主流排序模型 ## DIN: 论文链接: https://arxiv.org/pdf/1706.06978.pdf 数据集使用阿里数据集测试模型代码, 数据集链接: https://tianchi.aliyun.com/dataset/dataDetail?dataId=56 ### DIN调用方法: #### 0. 简介: DIN的输入特征中主要包含三个部分特征: 用户历史行为序列, 目标商品特征, 用户画像特征。 用户历史行为序列需包含点击序列与非点击序列。 请按如下1~2方法处理输入特征。 #### 1. 初始化: | 1,008 |
StevenGrove/TreeFilter-Torch | ['semantic segmentation'] | ['Learnable Tree Filter for Structure-preserving Feature Transform'] | furnace/tools/benchmark/__init__.py furnace/legacy/sync_bn/comm.py furnace/engine/engine.py furnace/seg_opr/sgd.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra/train.py furnace/base_model/xception.py model/voc/voc.fcn_32d.R101_v1c.extra/network.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter.further_finetune/network.py model/voc/voc.fcn_32d.R50_v1c/config.py furnace/tools/benchmark/statistics.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra/network.py furnace/datasets/ade/__init__.py furnace/datasets/__init__.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter.further_finetune/train.py furnace/kernels/lib_tree_filter/functions/mst.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter/network.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra.tree_filter/train.py furnace/seg_opr/__init__.py furnace/legacy/sync_bn/__init__.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra/eval.py furnace/kernels/lib_tree_filter/functions/bfs.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter.further_finetune/eval.py furnace/legacy/sync_bn/functions.py furnace/base_model/resnet.py furnace/tools/gluon2pytorch.py furnace/utils/init_func.py furnace/legacy/sync_bn/parallel.py furnace/datasets/BaseDataset.py furnace/datasets/pcontext/pcontext.py furnace/utils/img_utils.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter/dataloader.py furnace/legacy/sync_bn/src/gpu/setup.py furnace/engine/version.py model/voc/voc.fcn_32d.R50_v1c/dataloader.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra.tree_filter/network.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter.further_finetune/config.py furnace/legacy/parallel/parallel_apply.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter/eval.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra/config.py furnace/tools/benchmark/model_hook.py model/voc/voc.fcn_32d.R101_v1c.extra/config.py furnace/utils/pyt_utils.py furnace/legacy/sync_bn/src/cpu/setup.py furnace/tools/benchmark/compute_memory.py furnace/seg_opr/metric.py furnace/legacy/sync_bn/src/__init__.py furnace/datasets/camvid/camvid.py model/voc/voc.fcn_32d.R50_v1c/eval.py furnace/datasets/voc/voc.py furnace/kernels/lib_tree_filter/setup.py furnace/utils/visualize.py furnace/datasets/ade/ade.py furnace/tools/__init__.py model/voc/voc.fcn_32d.R50_v1c/train.py furnace/tools/benchmark/compute_speed.py furnace/tools/benchmark/compute_flops.py furnace/datasets/voc/__init__.py furnace/seg_opr/seg_oprs.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter.further_finetune/dataloader.py furnace/datasets/cityscapes/__init__.py furnace/datasets/pcontext/__init__.py furnace/tools/benchmark/stat_tree.py model/voc/voc.fcn_32d.R101_v1c.extra/eval.py furnace/base_model/__init__.py furnace/datasets/camvid/__init__.py model/voc/voc.fcn_32d.R101_v1c.extra/train.py model/voc/voc.fcn_32d.R50_v1c/network.py furnace/engine/lr_policy.py model/voc/voc.fcn_32d.R50_v1c.tree_filter/train.py furnace/tools/benchmark/reporter.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra.tree_filter/dataloader.py model/voc/voc.fcn_32d.R50_v1c.tree_filter/dataloader.py model/voc/voc.fcn_32d.R50_v1c.tree_filter/eval.py furnace/kernels/lib_tree_filter/modules/tree_filter.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra.tree_filter/config.py furnace/engine/logger.py furnace/seg_opr/loss_opr.py furnace/legacy/sync_bn/parallel_apply.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra/dataloader.py 
furnace/kernels/lib_tree_filter/functions/refine.py model/voc/voc.fcn_32d.R50_v1c.tree_filter/config.py model/voc/voc.fcn_32d.R50_v1c.tree_filter/network.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter/config.py furnace/datasets/cityscapes/cityscapes.py furnace/engine/evaluator.py model/voc/voc.fcn_32d.R101_v1c.extra.tree_filter/train.py furnace/legacy/sync_bn/syncbn.py model/cityscapes/cityscapes.fcn_32d.R101_v1c.extra.tree_filter/eval.py furnace/legacy/eval_methods.py model/voc/voc.fcn_32d.R101_v1c.extra/dataloader.py furnace/tools/benchmark/compute_madd.py ResBlock ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 xception39 Block Xception SeparableConvBnRelu BaseDataset ADE CamVid Cityscapes PascalContext VOC State Engine Evaluator get_logger LogFormatter MultiStageLR BaseLR LinearIncreaseLR PolyLR _BFS _MST _Refine MinimumSpanningTree TreeFilter2D val_func_process whole_eval scale_process pre_img sliding_eval get_a_var parallel_apply SyncMaster FutureResult SlavePipe _batchnormtrain batchnormtrain _sum_square sum_square CallbackContext allreduce AllReduce DataParallelModel execute_replication_callbacks Reduce patch_replication_callback get_a_var parallel_apply BatchNorm3d SharedTensor _SyncBatchNorm BatchNorm1d BatchNorm2d SigmoidFocalLoss ProbOhemCrossEntropy2d hist_info pixelAccuracy meanIoU accuracy intersectionAndUnion compute_score mean_pixel_accuracy ChannelAttention RefineResidual AttentionRefinement FeatureFusion ConvBnRelu BNRefine SeparableConvBnRelu SELayer GlobalAvgPool2d StandardSGD compute_Pool2d_flops compute_Upsample_flops compute_flops compute_BatchNorm2d_flops compute_ReLU_flops compute_Linear_flops compute_Conv2d_flops compute_Conv2d_madd compute_BatchNorm2d_madd compute_ConvTranspose2d_madd compute_AvgPool2d_madd compute_Softmax_madd compute_ReLU_madd compute_Linear_madd compute_madd compute_MaxPool2d_madd compute_Bilinear_madd compute_BatchNorm2d_memory compute_Conv2d_memory compute_Linear_memory compute_memory compute_Pool2d_memory num_params compute_ReLU_memory compute_PReLU_memory compute_speed ModelHook round_value report_format convert_leaf_modules_to_stat_tree get_parent_node ModelStat stat StatNode StatTree random_crop_pad_to_shape random_scale_with_length random_crop resize_ensure_shortest_edge random_gaussian_blur pad_image_to_shape center_crop random_scale random_rotation get_2dshape pad_image_size_to_multiples_of normalize generate_random_crop_pos random_mirror __init_weight group_weight init_weight load_model _dbg_interactive parse_devices ensure_dir all_reduce_tensor extant_file link_file reduce_tensor show_img show_prediction set_img_color print_iou get_ade_colors get_colors open_tensorboard add_path get_train_loader TrainPre SegEvaluator PredictHead Network open_tensorboard add_path get_train_loader TrainPre SegEvaluator PredictHead Network open_tensorboard add_path get_train_loader TrainPre SegEvaluator PredictHead Network open_tensorboard add_path get_train_loader TrainPre SegEvaluator PredictHead Network open_tensorboard add_path get_train_loader TrainPre SegEvaluator PredictHead Network open_tensorboard add_path get_train_loader TrainPre SegEvaluator PredictHead Network open_tensorboard add_path get_train_loader TrainPre SegEvaluator PredictHead Network ResNet load_model ResNet load_model ResNet load_model ResNet load_model ResNet load_model Xception load_model setFormatter getLogger addHandler formatter StreamHandler ensure_dir setLevel INFO FileHandler val_func_process permute resize pre_img zeros argmax 
shape argmax zeros resize int pad_image_to_shape min val_func_process pre_img shape permute resize ceil BORDER_CONSTANT numpy cuda range cuda ascontiguousarray concatenate pad_image_to_shape transpose normalize BORDER_CONSTANT Tensor map isinstance items get join list isinstance _worker get_context map start is_grad_enabled Queue append range len list hasattr __data_parallel_replicate__ modules enumerate len replicate sum nanmean sum diag nanmean sum histogram copy spacing sum sum float sum format isinstance print Conv2d Upsample BatchNorm2d __name__ Linear kernel_size groups shape affine prod kernel_size groups kernel_size groups kernel_size isinstance kernel_size isinstance format Softmax isinstance AvgPool2d print MaxPool2d Conv2d Bilinear BatchNorm2d ConvTranspose2d __name__ Linear PReLU format isinstance print Conv2d BatchNorm2d __name__ Linear numel num_params numel num_params numel size numel num_params numel numel time model randn synchronize set_device eval info cuda range sum list format fillna str parameter_quantity ConvFlops name duration Series inference_memory apply MAdd append DataFrame Flops join find_child_index split range len join get_parent_node items add_child tolist len StatNode range split ModelStat show_report int map pad_image_to_shape BORDER_CONSTANT get_2dshape randint get_2dshape zeros uint32 get_2dshape copyMakeBorder map float resize resize int choice resize choice flip getRotationMatrix2D warpAffine random GaussianBlur choice randint Number isinstance astype float32 named_modules isinstance conv_init bias weight constant_ __init_weight isinstance isinstance bias dict modules append weight Linear reduce clone div_ all_reduce clone div_ load items time format join isinstance set OrderedDict warning load_state_dict info keys int list format join endswith device_count info append range split remove format system makedirs embed range len uint8 array set_img_color uint8 set_img_color zeros array column_stack append range insert tolist join print size nanmean append range insert TrainPre batch_size DistributedSampler target_size image_mean distributed niters_per_epoch DataLoader image_std world_size dataset | # TreeFilter-Torch By [Lin Song](https://linsong.me), [Yanwei Li](https://yanwei-li.com), [Zeming Li](http://www.zemingli.com), [Gang Yu](https://www.skicyyu.org), [Hongbin Sun](http://gr.xjtu.edu.cn/web/hsun/chinese), [Jian Sun](http://www.jiansun.org), Nanning Zheng. This project provides a cuda implementation for "[Learnable Tree Filter for Structure-preserving Feature Transform](https://arxiv.org/pdf/1909.12513.pdf)" (NeurIPS2019) on PyTorch. Multiple semantic segmentation experiments are reproduced to verify the effectiveness of tree filtering module on PASCAL VOC2012 and Cityscapes. For the reason that the experiments in the paper were conducted using internal framework, this project reimplements them on PyTorch and reports detailed comparisons below. In addition, many thanks to [TorchSeg](https://github.com/ycszen/TorchSeg).  ## Prerequisites - PyTorch 1.2 - `sudo pip3 install torch torchvision` - Easydict - `sudo pip3 install easydict` | 1,009 |
StevenHickson/CreateNormals | ['surface normals estimation', 'semantic segmentation'] | ['Floors are Flat: Leveraging Semantics for Real-Time Surface Normal Prediction'] | python/calc_normals.py NormalCalculation | # CreateNormals [The paper can be found here](https://arxiv.org/abs/1906.06792) if you use this code for a paper, please cite the following: ``` @inproceedings{Hickson_2019_ICCV_Workshops, author = {Hickson, Steven and Raveendran, Karthik and Fathi, Alireza and Murphy, Kevin and Essa, Irfan}, title = {Floors are Flat: Leveraging Semantics for Real-Time Surface Normal Prediction}, booktitle = {The IEEE International Conference on Computer Vision (ICCV) Workshops}, month = {Oct}, year = {2019} | 1,010 |
StevenLiuWen/sRNN_TSC_Anomaly_Detection | ['anomaly detection'] | ['A Revisit of Sparse Coding Based Anomaly Detection in Stacked RNN Framework'] | libs/feature_loader_multi_patch.py libs/sista_rnn_anomaly_detection.py libs/sista_rnn_anomaly_detection_coherence.py libs/sista_rnn.py libs/common.py tools/__init__.py run_anomaly_detection.py libs/FLAGS.py tools/ground_truth.py tools/init_path.py evaluate.py libs/base.py run_anomaly_detection_coherence.py RecordResult compute_eer evaluate load_loss_gt compute_auc parser_args plot_roc test_func main main parse_args base checkdir checkrank BatchAdvancer HDF5Reader SequenceGeneratorVideo advance_batch readHDF5 FeatureLoader sista_rnn dim_reduction matmul sista_rnn_anomaly_detection_AE sista_rnn_anomaly_detection_TSC sista_rnn_anomaly_detection dim_reduction sista_rnn_anomaly_detection sista_rnn_anomaly_detection_AE matmul GroundTruthLoader add_path add_argument ArgumentParser gt_loader GroundTruthLoader len max show str ylabel title ylim gca append range plot concatenate load_loss_gt compute_auc annotate auc xlabel min roc_curve add_patch Rectangle array len RecordResult concatenate glob print load_loss_gt roc_curve shape any auc array range len AVENUE gt_loader format GroundTruthLoader print name random append plot_roc range len format eval_func print dataset auc join format RandomState asarray sista_rnn_anomaly_detection_TSC print train float32 placeholder test checkdir uniform ConfigProto Session sista_rnn_anomaly_detection_AE add_argument ArgumentParser makedirs list map sequence_generator array len as_list reshape AVENUE PED1 PED2 EXIT ENTRANCE SHANGHAITECH MOVINGMNIST append | # A revisit of sparse coding based anomaly detection in stacked rnn framework This repo is the official open source of [A revisit of sparse coding based anomaly detection in stacked rnn framework, ICCV 2017] It is implemented on tensorflow. Please follow the instructions to run the code. ## 1. Installation (Anaconda with python3.6 installation is recommended) * Install 3rd-package dependencies of python (listed in requirements.txt) ``` numpy==1.15.4 matplotlib==2.2.2 scikit_image==0.13.1 six==1.11.0 | 1,011 |
StonyBrookNLP/PerSenT | ['sentiment analysis'] | ["Author's Sentiment Prediction"] | pre_post_processing_steps/6_get_final_sentiment_compute_agreement_create_reannotation.py pre_post_processing_steps/8_masked_data_prepare_datasets.py pre_post_processing_steps/1_preprocess_EMNLP.py pre_post_processing_steps/2_core.py pre_post_processing_steps/10_data_stats.py pre_post_processing_steps/[7]_combine_4_votes.py pre_post_processing_steps/4_prepare_mturk_input.py pre_post_processing_steps/5_save_links.py pre_post_processing_steps/7_seperate_train_test.py pre_post_processing_steps/11_masked_lm_prepare_datasets.py pre_post_processing_steps/data_distribution.py pre_post_processing_steps/9_combine_pre_new_sets.py pre_post_processing_steps/3_entity_recognition.py word_distribution plot_class_distribution paragraph_distribution count_entities entity_frequency prepare_date map_title_doc del_unsused_doc_title word_distribution plot_class_distribution separate_train_test paragraph_distribution entity_frequency data_dist enclose_mask_entity data_dist tolist print pie axis title savefig figure plot Counter savefig figure most_common DataFrame xlabel print tolist len ylabel axis set hist savefig figure append sum split xlabel print tolist axis ylabel set hist savefig figure split append sum len time word_tokenize list replace print len tag append DataFrame range count list print len range set append DataFrame read_csv values open print len to_csv append keys read_csv drop remove print tolist head choice append DataFrame drop_duplicates read_csv len list start_char sorted replace print text Series nlp coref_clusters append end_char len append notnull split len | ## What is PerSenT? ### Person SenTiment, a challenge dataset for author's sentiment prediction in news domain. You can find our paper [Author's sentiment prediction](https://arxiv.org/abs/2011.06128) Mohaddeseh Bastan, Mahnaz Koupaee, Youngseo Son, Richard Sicoli, Niranjan Balasubramanian. COLING2020 We introduce PerSenT, a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotation for 5.3k documents and 38k paragraphs covering 3.2k unique entities. ### Example In the following example we see a 4-paragraph document about an entity (Donald Trump). Each paragraph is labeled separately and finally the author's sentiment towards the whole document is mentioned in the last row. <a href="https://github.com/MHDBST/PerSenT/blob/main/example2.png?raw=true"><img src="https://github.com/MHDBST/PerSenT/blob/main/example2.png?raw=true" alt="Image of PerSenT stats"/></a> ### Dataset Statistics To split the dataset, we separated the entities into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection. In our collection,there were four entities which were the main entity in nearly 800 articles. To avoid these entities from dominating the train or test splits, we moved them to a separate test collection. We split the remaining into a training, dev, and test sets at random. Thus our collection includes one standard test set consisting of articles drawn at random (Test Standard), while the other is a test set which contains multiple articles about a small number of popular entities (Test Frequent). | 1,012 |
StonyBrookNLP/SLDS-Stories | ['text generation'] | ['Generating Narrative Text in a Switching Dynamical System'] | scripts/viterbi.py S2S/EncDec.py EncDec.py scripts/sentiment_tag_rocstories.py scripts/gen_ref_for_decoding.py S2S/s2sa.py S2S/generate.py S2S/utils.py generate.py utils.py Sampler.py SLDS.py S2S/train.py data_utils.py LM.py train_alternate.py scripts/run_rouge.py scripts/convert_to_turk.py gibbs_interpolate.py scripts/calculate_transition.py generate_lm.py train_LM.py masked_cross_entropy.py ExtendableField load_vocab create_vocab RocStoryBatches RocStoryDataset S2SSentenceDataset transform RocStoryClozeDataset sentiment_label_vocab RocStoryDatasetSentiment gather_last sequence_mask Decoder fix_enc_hidden Encoder EncDecBase kl_divergence generate generate check_save_model_path generate LM compute_loss_unsupervised compute_loss_supervised compute_loss_unsupervised_LM _sequence_mask masked_cross_entropy print_iter_stats GibbsSampler SLDS check_save_model_path tally_parameters compute_loss_unsupervised compute_loss_supervised train print_iter_stats train print_iter_stats normal_sample top_k_logits normal_sample_deterministic gumbel_softmax_sample gumbel_sample get_context_vector gather_last sequence_mask Decoder fix_enc_hidden Encoder EncDecBase Attention generate S2SWithA train get_data_loader check_path_no_create variable build_transition get_stories write_for_turk rouge_log get_sents do_rouge write_for_rouge make_html_safe rouge_eval test_sentiment_tag get_sentiment sentiment_tag decode load Vocab Counter zero_ view numel cat Variable zeros cuda itos set_use_cuda combine_story compute_loss_unsupervised convert_to_target nll RocStoryClozeDataset cuda exp load_model squeeze exit RocStoryBatches vocab format eval cloze valid_data keys flush enumerate load print min load_vocab RocStoryDataset transform Tensor len model compute_loss_unsupervised_LM interpolate initial_sents reconstruct num_samples dirname abspath makedirs GibbsSampler train_data aggregate_gibbs_sample sentiment_label_vocab Variable Tensor cuda range print_iter_stats Variable mean Tensor cuda range print_iter_stats Variable mean Tensor range cuda CrossEntropyLoss print_iter_stats print max Variable size expand cuda expand_as long is_cuda view log_softmax size _sequence_mask float sum cuda print sum load_vectors save_model combine_story batch_size model compute_loss_unsupervised zero_grad convert_to_target compute_loss_supervised GloVe rnn_hidden_size save cuda clip open sentiment load_model squeeze Adam epochs RocStoryBatches use_pretrained vocab SLDS RocStoryDatasetSentiment format load_opt generative_params variational_params emb_size clip_grad_norm sentiment_label_vocab keys combine_sentiment_labels enumerate load time backward print min load_vocab RocStoryDataset parameters Tensor step hidden_size len compute_loss_unsupervised_LM LM exp randn Variable size cuda exp cuda size gumbel_sample topk expand_as mean BatchIter target S2SSentenceDataset tolist masked_cross_entropy text out_vocab BatchIter S2SSentenceDataset train_data batch_size get_data_loader clip_grad_norm_ target dataset expt_name S2SWithA ceil start_epoch float masked_cross_entropy int enc_hid_size text out_vocab dec_hid_size cuda makedirs print zeros sum zip get_stories format print append range len WARNING Rouge155 convert_and_evaluate setLevel print replace append join index split print rouge_log format print write_for_rouge rouge_eval rmtree mkdir exists polarity_scores SentimentIntensityAnalyzer format print SentimentIntensityAnalyzer sum flush 
len format print zeros argmax max range len | Code for Generating Narrative Text in a Switching Dynamical System Structure: 1. SLDS.py has the code for the proposed model. 2. train\_alternate.py and generate.py are used to train the model and for inference. 3. Lm.py has the code for the baseline Language model. 4. train\_LM.py and generate\_lm.py is used to train the model and for inference. 5. S2S folder has code for the code for the baseline sequence to sequence model with attention. 6. scripts folder has various scripts used for the project. 7. Sampler.py has thr main code for Gibbs sampling andgibbs\_interpolate.py is used to perform interpolations. | 1,013 |
Stream-AD/MIDAS | ['anomaly detection'] | ['Real-Time Anomaly Detection in Edge Streams', 'MIDAS: Microcluster-Based Detector of Anomalies in Edge Streams'] | util/DeleteTempFile.py util/EvaluateScore.py util/ReproduceROC.py util/PreprocessData.py darpa_original codes concat astype to_csv read_csv | # MIDAS <p> <a href="https://aaai.org/Conferences/AAAI-20/"> <img src="http://img.shields.io/badge/AAAI-2020-red.svg"> </a> <a href="https://arxiv.org/pdf/2009.08452.pdf"><img src="http://img.shields.io/badge/Paper-PDF-brightgreen.svg"></a> <a href="https://www.comp.nus.edu.sg/~sbhatia/assets/pdf/MIDAS_slides.pdf"> <img src="http://img.shields.io/badge/Slides-PDF-ff9e18.svg"> </a> <a href="https://youtu.be/Bd4PyLCHrto"> | 1,014 |
SuLab/crowdflower_relation_verification | ['relation extraction'] | ['Exposing ambiguities in a relation-extraction gold standard with crowdsourcing'] | src/true_relation_type.py src/file_util.py src/get_pubmed_abstract.py src/broad_reltype_performance.py src/unicode_to_ascii.py src/aggregate_votes.py src/filter_data.py src/check_settings.py aggregate_votes determine_broad_reltype_results has_proper_settings make_dir read_file filter_data query_ncbi get_abstract_information get_pubmed_article_xml_tree parse_article_xml_tree split_abstract get_euadr_relation_type convert_unicode_to_ascii append groupby sum len groupby insert sort map aggregate_votes append sum makedirs join format map query read_csv urlopen format query_ncbi text iter get format isinstance text convert_unicode_to_ascii iter append convert_unicode_to_ascii get_pubmed_article_xml_tree parse_article_xml_tree isinstance dict groupby filter_data normalize replace | ### Data and Code for *Exposing ambiguities in a relation-extraction gold standard with crowdsourcing* Last updated 2015-04-14 Tong Shu Li This repository contains the code and data used to generate *Exposing ambiguities in a relation-extraction gold standard with crowdsourcing*. Any questions can be sent to `[email protected]` ### Contents - **crowdflower/**: this directory contains all of the instructions and markup for CrowdFlower job 710587, which was used to gather the data analyzed in the paper. - **data/**: this directory contains the CrowdFlower output data as well as other data. - **src/**: this directory contains all of the source code referenced by the iPython notebooks. - **create_work_units.ipynb**: Code for randomly selecting some drug-disease relationships to show to the crowd. - **demographic_analysis.ipynb**: An analysis of the countries of origin of the task participants. | 1,015 |
SudhanshuMishra8826/Style-Transfer-On-Images | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | Algo2/styleTransfer.py Algo2/styleTransferOnVideos.py Algo1/styleTransferAlgo1.py Evaluator gram_matrix eval_loss_and_grads content_loss total_variation_loss style_loss dot transpose batch_flatten permute_dimensions gram_matrix square reshape astype f_outputs | # Style Transfer on Images The problem tackled in this report is Transfering the style of on image to contents of another image and thus creating a third image having the style similar to first image and content similar to second one. For achieving this I have used approaches from two different research papers and implemented them. ## Style transfer on Images using “Leon A.Gatys 's paper on ## A Neural Algorithm of Artistic Style” In this approach we used a VGG network with function to combine both the style and content loss and then by optimizing this loss in one iteration we will reduce the style and content loss | 1,016 |
SunnerLi/Cross-you-in-style | ['style transfer'] | ['Crossing You in Style: Cross-modal Style Transfer from Music to Visual Arts'] | Models/Classifier.py DataLoader/DatasetUtils.py Models/layer.py Models/loss.py Utils.py Models/model.py Models/Unet.py DataLoader/GetLoader.py evaluate.py DataLoader/WikiartDataset.py Models/generate.py batch_paint.py mkdir train get_gram presentParameters dumpWav dumpPaint sampleGt dumpMal dumpMusicLatent sample dumpYear gram_matrix gridTranspose Log mkdir normalize_batch label2year getArgs getTransform getLoaderEvaluate WikiartDatasetEvaluate UnetInDown Classifier basicConv UnetDown UnetUp UnetOutUp Generator Self_Attn Discriminator SpectralNorm2d LayerNorm l2normalize calc_gradient_penalty StyleLoss PerceptualLoss SVN UnetInDown basicConv UnetDown UnetUp Unet UnetOutUp format system model resample cuda squeeze dumpPaint append to format SVN getLoaderEvaluate eval getTransform mkdir item save_output_path enumerate load join print load_model_path Log system tqdm numpy len presentParameters add_argument ArgumentParser vars parse_args sorted format Log keys makedirs size stack append range enumerate len year_interval year_base join uint8 format imwrite transpose astype join format save join format join format save join format squeeze Log dumpPaint dumpMal dumpYear mkdir save_output_path numpy enumerate bmm size transpose view view div_ vgg cuda normalize_batch print format Compose paint_resize_min_edge DataLoader WikiartDatasetEvaluate Variable size rand expand mean netD cuda | # Crossing You in Style: Cross-modal Style Transfer from Music to Visual Arts
[[Project page]](https://sunnerli.github.io/Cross-you-in-style/) [[Arxiv paper]](https://arxiv.org/abs/2009.08083)
[[Dataset]](https://drive.google.com/drive/folders/1XgrXx1qKd8etj9-75ma_8z1tlO8Y49tE)
| 1,017 |
SurajDonthi/Clean-ST-ReID-Multi-Target-Multi-Camera-Tracking | ['person retrieval', 'person re identification'] | ['Beyond Part Models: Person Retrieval with Refined Part Pooling (and a Strong Convolutional Baseline)'] | mtmct_reid/eval.py mtmct_reid/engine.py mtmct_reid/data.py mtmct_reid/re_ranking.py mtmct_reid/train.py mtmct_reid/utils.py mtmct_reid/model.py mtmct_reid/rough.py mtmct_reid/metrics.py ReIDDataModule ReIDDataset ST_ReID generate_features main st_distribution smooth_st_distribution joint_scores gaussian_kernel mAP gaussian_func AP_CMC ClassifierBlock weights_init_classifier weights_init_kaiming PCB k_reciprocal_neigh re_ranking train plot_distributions standardize l2_norm_standardize save_args fliplr get_ids l2_norm_standardize cpu tensor Tensor cat load re_ranking print joint_scores Compose generate_features DataLoader eval ReIDDataset load_state_dict mAP model_path re_rank PCB st_distribution_path int sum unique zip zeros abs range len pow sqrt pi zeros arange range gaussian_func st_distribution dot gaussian_kernel sum range int16 exp matmul standardize type zip Tensor tensor abs cat setdiff1d argsort flatten intersect1d argwhere zero_ in1d append range len zero_ zip float AP_CMC len data normal_ kaiming_normal_ constant_ data normal_ constant_ zeros_like around max list exp transpose append sum range astype mean unique minimum int print float32 argpartition k_reciprocal_neigh zeros len num_classes experiment fit ST_ReID test Path save_args ModelCheckpoint save_dir TestTubeLogger from_argparse_args makedirs int stem append range enumerate split index_select long norm view size div expand_as append T sum subplots plot enumerate | # Multi-Camera Person Re-Identification
This repository is inspired by the paper [Spatial-Temporal Reidentification (ST-ReID)](https://arxiv.org/abs/1812.03282v1)[1]. The state-of-the-art for Person Re-identification tasks. This repository offers a flexible, and easy to understand clean implementation of the model architecture, training and evaluation.
This repository has been trained & tested on [DukeMTMTC-reID](https://megapixels.cc/duke_mtmc/) and [Market-1501 datasets](https://www.kaggle.com/pengcw1/market-1501). The model can be easily trained on any new datasets with a few tweaks to parse the files!
>You can do a quick run on Google Colab: [](https://colab.research.google.com/github/SurajDonthi/Multi-Camera-Person-Re-Identification/blob/master/demo.ipynb)
Below are the metrics on the various datasets.
| 1,018 |
SurajDonthi/Multi-Camera-Person-Re-Identification | ['person retrieval', 'person re identification'] | ['Beyond Part Models: Person Retrieval with Refined Part Pooling (and a Strong Convolutional Baseline)', 'Spatial-Temporal Person Re-identification'] | mtmct_reid/eval.py mtmct_reid/engine.py mtmct_reid/data.py mtmct_reid/re_ranking.py mtmct_reid/train.py mtmct_reid/utils.py mtmct_reid/model.py mtmct_reid/rough.py mtmct_reid/metrics.py ReIDDataModule ReIDDataset ST_ReID generate_features main st_distribution smooth_st_distribution joint_scores gaussian_kernel mAP gaussian_func AP_CMC ClassifierBlock weights_init_classifier weights_init_kaiming PCB k_reciprocal_neigh re_ranking train plot_distributions standardize l2_norm_standardize save_args fliplr get_ids l2_norm_standardize cpu tensor Tensor cat load re_ranking print joint_scores Compose generate_features DataLoader eval ReIDDataset load_state_dict mAP model_path re_rank PCB st_distribution_path int sum unique zip zeros abs range len pow sqrt pi zeros arange range gaussian_func st_distribution dot gaussian_kernel sum range int16 exp matmul standardize type zip Tensor tensor abs cat setdiff1d argsort flatten intersect1d argwhere zero_ in1d append range len zero_ zip float AP_CMC len data normal_ kaiming_normal_ constant_ data normal_ constant_ zeros_like around max list exp transpose append sum range astype mean unique minimum int print float32 argpartition k_reciprocal_neigh zeros len num_classes experiment fit ST_ReID test Path save_args ModelCheckpoint save_dir TestTubeLogger from_argparse_args makedirs int stem append range enumerate split index_select long norm view size div expand_as append T sum subplots plot enumerate | # Multi-Camera Person Re-Identification
This repository is inspired by the paper [Spatial-Temporal Reidentification (ST-ReID)](https://arxiv.org/abs/1812.03282v1)[1]. The state-of-the-art for Person Re-identification tasks. This repository offers a flexible, and easy to understand clean implementation of the model architecture, training and evaluation.
This repository has been trained & tested on [DukeMTMTC-reID](https://megapixels.cc/duke_mtmc/) and [Market-1501 datasets](https://www.kaggle.com/pengcw1/market-1501). The model can be easily trained on any new datasets with a few tweaks to parse the files!
>You can do a quick run on Google Colab: [](https://colab.research.google.com/github/SurajDonthi/Multi-Camera-Person-Re-Identification/blob/master/demo.ipynb)
Below are the metrics on the various datasets.
| 1,019 |
SurajDonthi/Multi-Target-Multi-Camera-Tracking-ST-ReID | ['person retrieval', 'person re identification'] | ['Beyond Part Models: Person Retrieval with Refined Part Pooling (and a Strong Convolutional Baseline)'] | mtmct_reid/eval.py mtmct_reid/engine.py mtmct_reid/data.py mtmct_reid/re_ranking.py mtmct_reid/train.py mtmct_reid/utils.py mtmct_reid/model.py mtmct_reid/rough.py mtmct_reid/metrics.py ReIDDataModule ReIDDataset ST_ReID generate_features main st_distribution smooth_st_distribution joint_scores gaussian_kernel mAP gaussian_func AP_CMC ClassifierBlock weights_init_classifier weights_init_kaiming PCB k_reciprocal_neigh re_ranking train plot_distributions standardize l2_norm_standardize save_args fliplr get_ids l2_norm_standardize cpu tensor Tensor cat load re_ranking print joint_scores Compose generate_features DataLoader eval ReIDDataset load_state_dict mAP model_path re_rank PCB st_distribution_path int sum unique zip zeros abs range len pow sqrt pi zeros arange range gaussian_func st_distribution dot gaussian_kernel sum range int16 exp matmul standardize type zip Tensor tensor abs cat setdiff1d argsort flatten intersect1d argwhere zero_ in1d append range len zero_ zip float AP_CMC len data normal_ kaiming_normal_ constant_ data normal_ constant_ zeros_like around max list exp transpose append sum range astype mean unique minimum int print float32 argpartition k_reciprocal_neigh zeros len num_classes experiment fit ST_ReID test Path save_args ModelCheckpoint save_dir TestTubeLogger from_argparse_args makedirs int stem append range enumerate split index_select long norm view size div expand_as append T sum subplots plot enumerate | # Multi-Camera Person Re-Identification
This repository is inspired by the paper [Spatial-Temporal Reidentification (ST-ReID)](https://arxiv.org/abs/1812.03282v1)[1]. The state-of-the-art for Person Re-identification tasks. This repository offers a flexible, and easy to understand clean implementation of the model architecture, training and evaluation.
This repository has been trained & tested on [DukeMTMTC-reID](https://megapixels.cc/duke_mtmc/) and [Market-1501 datasets](https://www.kaggle.com/pengcw1/market-1501). The model can be easily trained on any new datasets with a few tweaks to parse the files!
>You can do a quick run on Google Colab: [](https://colab.research.google.com/github/SurajDonthi/Multi-Camera-Person-Re-Identification/blob/master/demo.ipynb)
Below are the metrics on the various datasets.
| 1,020 |
SurajDonthi/Multi-Target-Multi-Camera-Tracking-ST-ReID-Clean-Code | ['person retrieval', 'person re identification'] | ['Beyond Part Models: Person Retrieval with Refined Part Pooling (and a Strong Convolutional Baseline)'] | mtmct_reid/eval.py mtmct_reid/engine.py mtmct_reid/data.py mtmct_reid/re_ranking.py mtmct_reid/train.py mtmct_reid/utils.py mtmct_reid/model.py mtmct_reid/rough.py mtmct_reid/metrics.py ReIDDataModule ReIDDataset ST_ReID generate_features main st_distribution smooth_st_distribution joint_scores gaussian_kernel mAP gaussian_func AP_CMC ClassifierBlock weights_init_classifier weights_init_kaiming PCB k_reciprocal_neigh re_ranking train plot_distributions standardize l2_norm_standardize save_args fliplr get_ids l2_norm_standardize cpu tensor Tensor cat load re_ranking print joint_scores Compose generate_features DataLoader eval ReIDDataset load_state_dict mAP model_path re_rank PCB st_distribution_path int sum unique zip zeros abs range len pow sqrt pi zeros arange range gaussian_func st_distribution dot gaussian_kernel sum range int16 exp matmul standardize type zip Tensor tensor abs cat setdiff1d argsort flatten intersect1d argwhere zero_ in1d append range len zero_ zip float AP_CMC len data normal_ kaiming_normal_ constant_ data normal_ constant_ zeros_like around max list exp transpose append sum range astype mean unique minimum int print float32 argpartition k_reciprocal_neigh zeros len num_classes experiment fit ST_ReID test Path save_args ModelCheckpoint save_dir TestTubeLogger from_argparse_args makedirs int stem append range enumerate split index_select long norm view size div expand_as append T sum subplots plot enumerate | # Multi-Camera Person Re-Identification
This repository is inspired by the paper [Spatial-Temporal Reidentification (ST-ReID)](https://arxiv.org/abs/1812.03282v1)[1]. The state-of-the-art for Person Re-identification tasks. This repository offers a flexible, and easy to understand clean implementation of the model architecture, training and evaluation.
This repository has been trained & tested on [DukeMTMTC-reID](https://megapixels.cc/duke_mtmc/) and [Market-1501 datasets](https://www.kaggle.com/pengcw1/market-1501). The model can be easily trained on any new datasets with a few tweaks to parse the files!
>You can do a quick run on Google Colab: [](https://colab.research.google.com/github/SurajDonthi/Multi-Camera-Person-Re-Identification/blob/master/demo.ipynb)
Below are the metrics on the various datasets.
| 1,021 |
Svito-zar/gesticulator | ['gesture generation'] | ['Gesticulator: A framework for semantically-aware speech-driven gesture generation'] | gesticulator/visualization/pymo/preprocessing_old.py gesticulator/data_processing/text_features/compare_bert_versions.py gesticulator/hyper_param_search/scheduled_search.py gesticulator/visualization/motion_visualizer/model_animator.py gesticulator/train.py gesticulator/visualization/motion_visualizer/convert2bvh.py gesticulator/visualization/pymo/preprocessing.py gesticulator/visualization/pymo/slask.py gesticulator/data_processing/pca_gestures.py gesticulator/interface/gesture_predictor.py gesticulator/data_processing/SGdataset.py gesticulator/data_processing/text_features/parse_json_transcript.py gesticulator/visualization/motion_visualizer/bvh2npy.py gesticulator/hyper_param_search/schedulers.py gesticulator/visualization/setup.py gesticulator/visualization/motion_visualizer/bvh_helper.py gesticulator/visualization/pymo/rotation_tools_bkp.py gesticulator/evaluate.py gesticulator/data_processing/text_features/syllable_count.py gesticulator/visualization/pymo/rotation_tools.py gesticulator/data_processing/process_dataset.py gesticulator/visualization/pymo/features.py gesticulator/obj_evaluation/calc_errors.py gesticulator/visualization/motion_visualizer/visualize.py gesticulator/visualization/pymo/parsers.py install_script.py gesticulator/visualization/motion_visualizer/generate_videos.py gesticulator/hyper_param_search/random_search.py gesticulator/visualization/pymo/viz_tools.py gesticulator/data_processing/data_params.py gesticulator/data_processing/tools.py gesticulator/hyper_param_search/ray_trainable.py gesticulator/obj_evaluation/calc_histogram.py gesticulator/visualization/motion_visualizer/read_bvh.py gesticulator/obj_evaluation/calc_jerk.py gesticulator/data_processing/split_dataset.py gesticulator/model/model.py gesticulator/data_processing/test_audio.py gesticulator/visualization/pymo/writers.py demo/demo.py gesticulator/data_processing/bvh2features.py gesticulator/model/final_model.py gesticulator/data_processing/features2bvh.py dataset/rename_data_files.py gesticulator/config/model_config.py gesticulator/obj_evaluation/plot_hist.py setup.py gesticulator/visualization/pymo/Pivots.py gesticulator/model/prediction_saving.py gesticulator/visualization/pymo/data.py gesticulator/visualization/pymo/Quaternions.py main check_feature_type truncate_audio parse_args main create_save_dirs add_test_script_arguments create_logger main add_training_script_arguments ModelSavingCallback construct_model_config_parser extract_joint_angles feat2bvh create_embedding _save_data_as_sequences create_dataset _save_dataset _encode_vectors SpeechGestureDataset ValidationDataset check_dataset_directories create_dataset_splits copy_files _create_data_directories _format_datasets _files_to_pandas_dataframe create_bvh derivative calculate_mfcc average extract_prosodic_features compute_prosody calculate_spectrogram shorten encode_json_transcript_with_bert check_json_transcript encode_json_transcript_with_bert_DEPRECATED json_time_to_deciseconds merge_subword_encodings get_bert_embedding extract_extra_features extract_word_attributes count_syllables syl_count TrainableTrainer MyEarlyStopping main MyAsyncHyperBandScheduler MyAsyncHyperBandScheduler MyFIFOScheduler _Bracket ASHAv2 GesturePredictor My_Model weights_init_he weights_init_zeros GesticulatorModel weights_init_he weights_init_zeros PredictionSavingMixin read_joint_names remove_velocity MAE APE main main 
compute_velocity save_result compute_acceleration main save_result compute_acceleration compute_jerks read_csv read_joint_names convert_bvh2npy load parse_bvh_node BVHChannel loads BVHNode write_bvh generate_videos create_arg_parser visualize create_video AnimateSkeletons ModelSkeletons append obtain_coords read_bvh_to_array convert_bvh2npy visualize Joint MocapData plot_foot_up_down create_foot_contact_signal get_foot_contact_idxs BVHParser BVHScanner Pivots RootCentricPositionNormalizer Mirror JointSelector Flattener ConstantsRemover ReverseTime Numpyfier EulerReorder MocapParameterizer DownSampler TemplateTransform ListMinMaxScaler Slicer RootTransformer ListStandardScaler RootCentricPositionNormalizer Mirror JointSelector Flattener ConstantsRemover ReverseTime Numpyfier EulerReorder MocapParameterizer DownSampler TemplateTransform ListMinMaxScaler Slicer RootTransformer ListStandardScaler Quaternions euler_reorder Rotation euler2expmap2 expmap2euler euler_reorder2 rad2deg deg2rad unroll_2 euler2expmap unroll offsets offsets_inv unroll_1 Rotation expmap2euler euler2expmap rad2deg deg2rad EulerReorder Offsetter nb_play_mocap_fromurl draw_stickfigure draw_stickfigure3d viz_cnn_filter sketch_move print_skel nb_play_mocap save_fig BVHWriter check_feature_type int remove model_file detach print text video_out predict_gestures audio call GesturePredictor split load_from_checkpoint visualize load print exit load write_wav replace add_argument ArgumentParser create_save_dirs eval generate_evaluation_videos join generated_gestures_dir makedirs add_argument join GesticulatorModel create_logger save_checkpoint save_dir from_argparse_args fit rpartition add_argument add_argparse_args ArgumentParser add join list BVHParser parse dump savez print append Pipeline fit_transform load BVHWriter print shape inverse_transform load encode_json_transcript_with_bert isinstance concatenate print future_context min exit seq_len calculate_mfcc past_context shape extract_prosodic_features calculate_spectrogram shorten array len join _save_data_as_sequences proc_data_dir _save_dataset read_csv makedirs join save trange _encode_vectors len join concatenate print save trange _encode_vectors len print from_pretrained exit join copy zfill isfile join copy_files to_csv _create_data_directories _format_datasets range print join abspath makedirs print _files_to_pandas_dataframe range join format print zfill abspath append getsize join print exit abspath split convolve copy min len int len load melspectrogram abs log mfcc read transpose arange derivative from_file min transpose average stack compute_prosody len get_total_duration eps asarray arange log nan_to_num to_pitch to_intensity Sound clip list format print exit get_bert_embedding extract_extra_features extend any extract_word_attributes append array len join merge_subword_encodings unsqueeze encode numpy convert_ids_to_tokens strip mean iter startswith append print float float rstrip json_time_to_deciseconds print exit extend extract_extra_features any extract_word_attributes append array len findall lower enumerate len dict lower items Namespace Trainer ModelCheckpoint setattr fill_ in_features normal_ sqrt out_features __name__ zeros_ data __name__ arange mean empty range mean_absolute_error zeros norm range ArgumentParser sorted basename std shape predicted parse_args append format glob upper mean zip original load remove_velocity add_argument min out makedirs norm reshape zeros range diff zeros norm range diff print join format makedirs arange warn show str 
save_result ylabel title legend width sum range plot concatenate size tight_layout stack xlabel measure histogram len norm mean zeros range diff mean reshape read_bvh_to_array save concatenate name filter append BVHNode Bvh parse_bvh_node load inverse_transform BVHWriter range write_bvh convert_bvh2npy create_video load join remove print isfile listdir visualize add_argument ArgumentParser AnimateSkeletons getSkeletons animate ModelSkeletons initLines save reshape children append load_frame range apply_transformation load obtain_coords array len indexes get_foot_contact_idxs append range values len index get_foot_contact_idxs plot values norm reshape pi copy tile append range diff norm reshape pi copy tile range einsum diff from_euler rad2deg deg2rad euler print deg2rad rad2deg mat2euler lower euler2mat qinverse quat2euler euler2quat deg2rad rad2deg lower qmult quat2euler euler2quat deg2rad rad2deg lower qmult from_euler deg2rad angle_axis lower euler2axangle deg2rad axangle2euler lower norm array savefig tight_layout plot add_subplot scatter figure annotate keys values plot text add_subplot scatter figure keys values plot add_subplot figure keys range values T plot axis imshow scatter figure subplot2grid keys range enumerate pop print append len BVHWriter list columns remove to_csv join list columns remove replace str to_csv realpath dirname | [](https://youtu.be/VQ8he6jjW08) This repository contains PyTorch based implementation of the ICMI 2020 Best Paper Award recipient paper [Gesticulator: A framework for semantically-aware speech-driven gesture generation](https://svito-zar.github.io/gesticulator/). ## 0. Set up ### Requirements - python3.6+ - ffmpeg (for visualization) ### Installation **NOTE**: during installation, there will be several error messages (one for bert-embedding and one for mxnet) about conflicting packages - please ignore them, they don't affect the functionality of the repository. - Clone the repository: ``` | 1,022 |
Svobikl/cz_corpus | ['word embeddings'] | ['New word analogy corpus for exploring embeddings of Czech words'] | Evaluator.py | # cz_corpus
Word embedding methods have proven to be very useful in many NLP
(Natural Language Processing) tasks. Much has been investigated about
word embeddings of English words and phrases, but comparatively little
attention has been dedicated to other languages. Our goal in this paper
is to explore the behavior of state-of-the-art word embedding methods
on Czech, a language characterized by very rich morphology. We introduce
a new corpus for the word analogy task that inspects syntactic,
morphosyntactic and semantic properties of Czech words.
| 1,023 |
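To make the word analogy task described in the cz_corpus entry above concrete, the sketch below scores one analogy question with pretrained vectors. It uses gensim's `KeyedVectors` purely as an assumed stand-in; the repository's own `Evaluator.py` may load and score embeddings differently, and the file path and example words are illustrative only.

```python
from gensim.models import KeyedVectors

# Load pretrained word vectors (path and text format are illustrative assumptions).
vectors = KeyedVectors.load_word2vec_format("czech_vectors.txt", binary=False)

# A semantic analogy question: "muž" (man) is to "žena" (woman)
# as "král" (king) is to ? -- the expected answer is "královna" (queen).
predicted = vectors.most_similar(positive=["žena", "král"], negative=["muž"], topn=1)
print(predicted)  # e.g. [("královna", 0.78)]

# Corpus accuracy is then the fraction of analogy questions whose
# top-ranked prediction matches the expected fourth word.
```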
SwordHolderSH/neural-style-pytorch | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | utils.py neural_style.py vgg.py closure save_image_epoch normalize gram_matrix normalize_batch load_image Vgg16 save_image_epoch format backward abs print zero_grad clamp_ relu4_3 zip vgg sum mse_loss resize ANTIALIAS open bmm size transpose view view div_ join str squeeze clone unloader save cuda | # neural-style-pytorch A simple PyTorch implementation of "A Neural Algorithm of Artistic Style" ## Introduction I tried some other PyTorch implementations of neural style, but their outputs may become pure noise at certain epochs, such as epochs 45, 170 and 230 in Figure 1; I do not know why. Therefore, I simply implement the method of "A Neural Algorithm of Artistic Style" (http://arxiv.org/abs/1508.06576). <div align=center> <img src="https://github.com/SwordHolderSH/neural-style-pytorch/blob/master/demos/output.png" width="900" /> </div> <p align="center">**Figure 1**</p> ## Results <p align="left"> The outputs are shown in Table 1. </p> <table> <tr>
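To make the neural-style-pytorch entry above more concrete: the style representation in "A Neural Algorithm of Artistic Style" is built from Gram matrices of VGG feature maps, and the repository's dependency list indeed contains a `gram_matrix` helper. The following is a minimal, self-contained sketch of such a function, not a copy of the repository's code.

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a batch of feature maps, used for the style loss."""
    b, c, h, w = features.size()
    flat = features.view(b, c, h * w)             # flatten spatial dimensions
    gram = torch.bmm(flat, flat.transpose(1, 2))  # channel-by-channel correlations
    return gram.div(c * h * w)                    # normalize by feature map size

# Example: activations from a VGG layer with 64 channels on a 128x128 input
activations = torch.randn(1, 64, 128, 128)
print(gram_matrix(activations).shape)  # torch.Size([1, 64, 64])
```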
Swybino/PotatoNet | ['face detection'] | ['PyramidBox: A Context-assisted Single Shot Face Detector'] | data_handler.py MemTrack/memnet/access.py MemTrack/build_tfrecords/process_xml.py faceAlignment/face_alignment/__init__.py faceAlignment/examples/detect_landmarks_in_image_3D.py PyramidBox/nets/ssd.py faceAlignment/test/test_utils.py faceAlignment/test/facealignment_test.py faceAlignment/face_alignment/detection/sfd/sfd_detector.py utils.py PyramidBox/tf_extended/bboxes.py side_scripts/align_faces.py test.py MemTrack/memnet/memnet.py faceAlignment/face_alignment/api.py faceAlignment/face_alignment/detection/dlib/__init__.py MemTrack/config.py config.py PyramidBox/widerface_eval.py MemTrack/model.py PyramidBox/tf_utils.py MemTrack/memnet/rnn.py faceAlignment/face_alignment/detection/folder/folder_detector.py PyramidBox/makedir.py MemTrack/memnet/utils.py faceAlignment/face_alignment/utils.py faceAlignment/face_alignment/detection/sfd/detect.py MemTrack/tracking/demo.py PyramidBox/tf_extended/tensors.py PyramidBox/nets/np_methods.py faceAlignment/face_alignment/detection/sfd/bbox.py faceAlignment/face_alignment/detection/folder/__init__.py MemTrack/build_tfrecords/build_data_vid.py face_analyzer.py PyramidBox/demo.py faceAlignment/setup.py faceAlignment/face_alignment/models.py MemTrack/tracking/tracker.py faceAlignment/examples/detect_landmarks_in_image.py PyramidBox/nets/custom_layers.py tests/_test_iou.py eval/evaluation.py MemTrack/input.py PyramidBox/preprocessing/vgg_preprocessing.py faceAlignment/face_alignment/detection/core.py visualization.py PyramidBox/utility/visualization.py PyramidBox/preprocessing/inception_preprocessing.py init.py MemTrack/build_tfrecords/collect_vid_info.py PyramidBox/preprocessing/ssd_vgg_preprocessing.py face_alignment.py PyramidBox/AnchorSampling.py PyramidBox/tf_extended/metrics.py PyramidBox/train_model.py eval/label.py faceAlignment/face_alignment/detection/sfd/net_s3fd.py MemTrack/estimator.py PyramidBox/tf_extended/math.py PyramidBox/datasets/dataset_utils.py faceAlignment/face_alignment/detection/sfd/__init__.py PyramidBox/tf_extended/__init__.py faceAlignment/face_alignment/detection/dlib/dlib_detector.py MemTrack/build_tfrecords/generate_vidb.py PyramidBox/check_data_io.py PyramidBox/datasets/pascalvoc_to_tfrecords.py MemTrack/experiment.py PyramidBox/nets/ssd_common.py PyramidBox/preparedata.py PyramidBox/datasets/pascalvoc_datasets.py PyramidBox/preprocessing/tf_image.py PyramidBox/preprocessing/preprocessing_factory.py MemTrack/feature.py faceAlignment/test/FA_test.py faceAlignment/face_alignment/detection/__init__.py faceAlignment/test/smoke_test.py side_scripts/result_video.py MemTrack/memnet/addressing.py children_tracker.py detect_faces MainTracker get_data DataHandler save_data FaceAligner face_orientation FaceAnalizer load_init load_seq_video process_image run_tracker display_result rotate_landmarks get_angular_dist get_roi is_between bb_intersection_over_union get_bbox_dist is_point_in_bbox rotate_roi crop_roi is_bbox_in_bbox_list get_angle grad bbox_in_roi bbox_img_coord is_point_in_bbox_list get_bbox_dict_ang_pos get_list_disorder load_seq_video landmarks_img_coord reformat_bbox_coord bb_contained rotate_bbox VisualizerOpencv VisualizerPlt Evaluator Labeler find_version read plot plot FaceAlignment LandmarksType NetworkSize ResNetDepth Bottleneck conv3x3 FAN HourGlass ConvBlock appdata_dir shuffle_lr get_preds_fromhm _gaussian flip draw_gaussian transform crop FaceDetector DlibDetector FolderDetector nms decode bboxlog encode 
bboxloginv flip_detect detect pts_to_bb s3fd L2Norm SFDDetector Tester Tester Estimator experiment extract_feature get_key_feature conv2d bn_relu_conv2d conv2d_bn_relu _parse_example_proto _batch_input _generate_labels_overlap_py _distort_color _bbox_overlaps generate_labels_dist generate_input_fn _translate_and_strech _process_images generate_labels_overlap get_loss build_model ModeKeys model_fn get_train_op build_initial_state get_saver get_dist_error get_summary focal_loss get_predictions batch_conv get_cnn_feature _float_feature_list _int64_feature partition_vid _bytes_feature_list save_to_tfrecords Vid convert_to_example process_videos_area build_tfrecords EncodeJpeg _bytes_feature _float_feature process_videos process collect_video_info generate_vidb Vid BoundingBox process_xml get_int get_item find_num_bb MemoryAccess _reset_and_write attention_read _weighted_softmax update_usage cosine_similarity _vector_norms calc_allocation_weight attention MemNet rnn weights_summay run_tracker load_seq_config display_result calc_x_size get_new_state Model Tracker calc_z_size process_image PrepareData get_init_fn configure_optimizer update_model_scope reshape_list get_variables_to_train print_configuration add_variables_summaries configure_learning_rate TrainModel process_image download_and_uncompress_tarball image_to_tfexample write_label_file int64_feature bytes_feature float_feature read_label_file has_labels get_dataset_info _convert_to_example _add_to_tfrecord _get_output_filename run _get_dataset_filename _process_image abs_smooth_2 pad2d channel_to_last l2_normalization abs_smooth bboxes_nms ssd_bboxes_decode ssd_bboxes_select bboxes_nms_fast bboxes_jaccard bboxes_clip ssd_bboxes_select_layer bboxes_intersection bboxes_resize bboxes_sort PyramidBoxModel tf_ssd_bboxes_select_all_classes tf_ssd_bboxes_select_layer_all_classes tf_ssd_bboxes_encode tf_ssd_bboxes_select_layer tf_ssd_bboxes_encode_layer tf_ssd_bboxes_decode tf_ssd_bboxes_select tf_ssd_bboxes_decode_layer distorted_bounding_box_crop preprocess_for_train preprocess_for_eval preprocess_image distort_color apply_with_random_selector get_preprocessing tf_image_unwhitened distorted_bounding_box_crop np_image_unwhitened tf_summary_image preprocess_for_train preprocess_for_eval preprocess_image tf_image_whitened distort_color apply_with_random_selector _assert _Check3DImage bboxes_crop_or_pad fix_image_flip_shape random_flip_left_right _ImageDimensions _is_tensor resize_image resize_image_bboxes_with_crop_or_pad _aspect_preserving_resize preprocess_for_train _crop _central_crop _smallest_size_at_least _mean_image_subtraction preprocess_for_eval preprocess_image _random_crop bboxes_nms bboxes_matching bboxes_filter_overlap bboxes_filter_labels bboxes_nms_batch bboxes_jaccard bboxes_matching_batch bboxes_sort_all_classes bboxes_clip bboxes_intersection bboxes_filter_center bboxes_resize bboxes_sort safe_divide cummax _create_local streaming_tp_fp_arrays average_precision_voc12 precision_recall_values _safe_div _precision_recall average_precision_voc07 precision_recall _broadcast_weights streaming_precision_recall_arrays get_shape pad_axis bboxes_draw_on_img colors_subselect plt_bboxes draw_rectangle draw_bbox draw_lines to_video fromarray int uint8 bboxes_nms ssd_bboxes_select bboxes_sort bboxes_clip resize ssd_anchors_all_layers bboxes_resize run norm degrees mean cross arcsin array fromarray int uint8 bboxes_nms ssd_bboxes_select bboxes_sort bboxes_clip resize ssd_anchors_all_layers bboxes_resize run VideoCapture read join imwrite 
sorted print mkdir img_dir sequence_path listdir release makedirs join data_dir readlines append open ConfigProto FONT_HERSHEY_DUPLEX imwrite replace tuple putText save_img astype imshow rectangle split merge video_path split max get_roi int roi_min_size min max get_roi rotate_bound items list get_angle abs get_angle enumerate index len is_point_in_bbox read search M show add_subplot axis imshow figure plot3D view_init set_xlim scatter exp empty range sum _gaussian ones inverse eye transform zeros array resize view FloatTensor size add_ apply_ mul_ transform zeros float max range from_numpy ndimension join remove isdir close executable lstrip getenv prefix getattr startswith abspath dirname expanduser mkdir append log cat decode list view reshape transpose size zeros shape softmax zip append Tensor to array range len zeros detect flip shape min max train generate_input_fn Estimator conv2d max_pooling2d conv2d_bn_relu concat split conv2d batch_normalization batch_normalization relu as_list use_fc_key reshape average_pooling2d key_dim bn_relu_conv2d enqueue image sorted string_input_producer num_preprocess_threads cast append num_readers range min_queue_examples batch_join add_queue_runner glob QueueRunner _process_images _parse_example_proto join read dequeue int32 RandomShuffleQueue to_int32 patch_size crop_to_bounding_box z_exemplar_size top_k linspace random_uniform set_shape gather is_limit_search to_float max_strech_x append max_translate_x range random_shuffle max_search_range max_strech_z max_translate_z stack is_augment decode_jpeg minimum int convert_image_dtype _translate_and_strech convert_to_tensor minimum concat squeeze expand_dims reverse random_uniform crop_and_resize tile float round random_normal random_hue random_saturation random_brightness clip_by_value random_contrast parse_single_sequence_example count_nonzero dist tile zeros array range as_list reshape tolist set_shape py_func count_nonzero arange concatenate print reshape transpose hstack _bbox_overlaps pause stride imshow meshgrid zeros expand_dims append minimum prod maximum as_list reshape as_list reshape squeeze map_fn batch_normalization batch_conv rnn initial_state as_list sigmoid_cross_entropy_with_logits reduce_sum generate_labels_dist focal_loss scalar array use_focal_loss generate_labels_overlap as_list reshape stack tile mean_squared_error expand_dims argmax scalar trainable_variables learning_rate get_or_create_global_step l2_regularizer apply_regularization gradients clip_by_global_norm get_collection lr_decay weight_decay decay_circles AdamOptimizer UPDATE_OPS exponential_decay clip_gradients scalar get_loss get_train_op get_saver get_dist_error get_summary get_predictions get_cnn_feature get_cnn_feature get_saver batch_conv rnn get_cnn_feature int sorted print unique append sum array range len load join Thread basename append partition_vid TFRecordWriter print makedirs Coordinator tfrecords_path start open enumerate flush len height ymin print save_to_tfrecords tolist ymax dir objs xmax xmin acquire width append process release get_img_buffer enumerate print save_to_tfrecords tolist objs dir acquire append process release get_img_buffer enumerate tuple patch_size z_exemplar_size paste floor resize open new prod hstack astype sqrt tile crop join fix_aspect maximum context_amount z_scale img_path array mode len write convert_to_example SerializeToString zip enumerate FeatureLists SequenceExample Features join sorted write close open listdir len pop int join sorted dump max_trackid print trackid Vid 
process_xml open img_path listdir append split iter BoundingBox join parse get_int get_item getroot append find_num_bb range as_list reshape expand_dims reduce_sum softmax strengths_op expand_dims _vector_norms matmul dense as_list tanh conv2d expand_dims usage_decay stop_gradient squeeze top_k one_hot as_list dense tanh format reshape reduce_max conv2d softmax histogram expand_dims argmax as_list summary_display_step format weights_summay tuple write_weight reshape concat read_weight usage append memory range len as_list format histogram range join sorted otb_data_dir readlines append listdir open join save_path is_save fix_aspect context_amount sqrt z_scale repeat prod x_instance_size z_exemplar_size AccessState num_scale LSTMStateTuple append array append list isinstance join print_config makedirs int num_epochs_per_decay batch_size MomentumOptimizer AdagradOptimizer GradientDescentOptimizer AdamOptimizer RMSPropOptimizer AdadeltaOptimizer FtrlOptimizer name get_model_variables histogram append scalar checkpoint_path latest_checkpoint get_model_variables checkpoint_exclude_scopes IsDirectory startswith info append train_dir extend get_collection TRAINABLE_VARIABLES join urlretrieve print extractall stat join join index split TFExampleDecoder TFRecordReader join read format parse int encode findall print text getroot append find Example _convert_to_example write SerializeToString _process_image seed join sorted int print min shuffle mkdir ceil float listdir _get_dataset_filename range len minimum abs abs square where shape exp reshape zeros_like ssd_bboxes_decode reshape where shape argmax amax concatenate ssd_bboxes_select_layer append range len argsort minimum transpose maximum copy copy minimum transpose maximum minimum transpose maximum ones size logical_and where bboxes_jaccard shape logical_or range append maximum minimum ones while_loop stack zeros log stack exp get_shape dtype reshape reduce_max greater stack cast argmax random_uniform constant constant cast int32 uint8 astype copy tf_image_unwhitened expand_dims image draw_bounding_boxes _is_tensor as_list shape unstack is_fully_defined any with_rank get_shape set_shape pack greater_equal slice to_int32 logical_and with_dependencies Assert shape rank equal greater_equal reshape logical_and with_dependencies extend Assert shape rank random_uniform append range equal len append _crop range split convert_to_tensor to_float to_int32 greater cond convert_to_tensor resize_bilinear squeeze shape set_shape _smallest_size_at_least expand_dims to_float _aspect_preserving_resize random_flip_left_right set_shape random_uniform to_float set_shape _aspect_preserving_resize isinstance isinstance list get_shape list keys isinstance list keys as_list shape unstack is_fully_defined len append range isinstance len line rectangle FONT_HERSHEY_DUPLEX putText rectangle str FONT_HERSHEY_DUPLEX putText shape rectangle range int suptitle add_patch dict imshow figure Rectangle range join write VideoWriter imread listdir VideoWriter_fourcc release | # PotatoNet using PyramidBox for face Detection @article{DBLP:journals/corr/abs-1803-07737, author = {Xu Tang and Daniel K. Du and Zeqiang He and Jingtuo Liu}, title = {PyramidBox: {A} Context-assisted Single Shot Face Detector}, journal = {CoRR}, volume = {abs/1803.07737}, | 1,025 |
SydCaption/SAAT | ['video captioning'] | ['Syntax-Aware Action Targeting for Video Captioning'] | 3D-ResNets-PyTorch/models/resnet.py misc/extract_feats_2D.py misc/subject_extraction/subject_extraction.py misc/compare.py test_svo.py misc/extract_feats_motion.py model_svo.py 3D-ResNets-PyTorch/models/densenet.py 3D-ResNets-PyTorch/models/pre_act_resnet.py misc/extract_feats_roi.py dataloader_svo.py utils.py misc/subject_extraction/trigram_tagger.py misc/fetch_frame_size.py opts_svo.py 3D-ResNets-PyTorch/models/wide_resnet.py create_sequencelabel.py 3D-ResNets-PyTorch/models/resnext.py misc/extract_frames.py train_svo.py main encode_captions DataLoader MANet FeatExpander CaptionModel CrossEntropyCriterion FeatPool RewardCriterion RNNUnit to_contiguous parse_opts cpuStats validate language_eval check_model test memReport train decode_sequence compute_avglogp score load_gt_refs language_eval adjust_learning_rate compute_score get_self_critical_reward get_self_critical_reward2 array_to_str get_cst_reward get_fine_tuning_parameters DenseNet densenet201 densenet169 densenet264 _DenseLayer _DenseBlock _Transition densenet121 conv3x3x3 get_fine_tuning_parameters resnet50 downsample_basic_block resnet152 PreActivationBasicBlock resnet34 resnet200 PreActivationBottleneck resnet18 PreActivationResNet resnet101 conv3x3x3 get_fine_tuning_parameters ResNet downsample_basic_block resnet50 Bottleneck resnet152 resnet34 resnet200 resnet18 resnet10 BasicBlock resnet101 ResNeXtBottleneck conv3x3x3 get_fine_tuning_parameters resnet50 downsample_basic_block ResNeXt resnet152 resnet101 conv3x3x3 get_fine_tuning_parameters WideBottleneck resnet50 downsample_basic_block WideResNet res_init extract_feats extract_feats extract_frames word_freq_dist tag_sentences get_entities merge_multi_word_subject get_svo extract_subject download_document tokenize_sentences trained_tagger clean_document SubjectTrigramTagger concatenate min len repr shape info append zeros sum enumerate load append info open is_contiguous parse_args add_argument ArgumentParser is_tensor print size get_objects type Process print cpu_percent getpid version virtual_memory uuid4 join str dump remove model_file info open use_cst_after validate get_seq_per_img model labda clip_grad_norm_ zero_grad set_mixer_from scb_captions get_self_critical_reward use_rl_after adjust_learning_rate cuda exists set_current_epoch seq_length max ss_k basename exp ss_max_prob load_state_dict cst_increase_every ceil grad_clip update get_batch rl_criterion use_ss_after start_from set_seq_per_img get_current_epoch info is_available sample float get_cst_reward item load join time ss_prob int model_file isdir criterion Variable mixer_descrease_every set_ss_prob min backward dumps check_model parameters mixer_from empty_cache step data get_seq_per_img model get_batch_size round cuda open str ceil append sum range vocab update dump get_batch replace language_eval debug set_seq_per_img eval has_label info is_available sample zip cocofmt_file float enumerate uuid4 int join decode_sequence remove model_file compute_avglogp criterion reshape extend get_num_videos reset empty_cache len validate dump dumps info result_file open model_file history_file eval_metric copy save info lr_update param_groups learning_rate compute_score isinstance zip load append open set_trace append decode size range append shape numpy range items COCOEvalCap evaluate COCO getImgIds loadRes round range len mean compute_score isinstance size OrderedDict mean repeat compute_score numpy array range len isinstance 
reshape size sort copy OrderedDict mean repeat compute_score numpy array range len DenseNet DenseNet DenseNet DenseNet append format range named_parameters data isinstance FloatTensor Variable zero_ avg_pool3d cuda cat PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet ResNet ResNet ResNet ResNet ResNet ResNet ResNet ResNeXt ResNeXt ResNeXt WideResNet zeros unsqueeze save cuda exists shape resnet101 append range cat format Compose astype eval get_reader net join print repeat cpu numpy get_std get_mean len join str format imwrite print tolist len repeat mkdir get_reader enumerate get join text BeautifulSoup get_text join sub split sent_tokenize append tokenize_sentences lower ne_chunk FreqDist word_tokenize get items sorted text nlp load dump tagged_sents open SubjectTrigramTagger tokenize_sentences trained_tagger tag enumerate split text nlp next range enumerate len | # Syntax-Aware Action Targeting for Video Captioning
Code for SAAT from ["Syntax-Aware Action Targeting for Video Captioning"](http://openaccess.thecvf.com/content_CVPR_2020/papers/Zheng_Syntax-Aware_Action_Targeting_for_Video_Captioning_CVPR_2020_paper.pdf) (Accepted to CVPR 2020). The implementation is based on ["Consensus-based Sequence Training for Video Captioning"](https://github.com/mynlp/cst_captioning).
## Dependencies
* Python 3.6
* PyTorch 1.1
* CUDA 10.0
* [Microsoft COCO Caption Evaluation](https://github.com/tylin/coco-caption)
| 1,026 |
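As a small aid to the SAAT dependency list above, the snippet below is one way to confirm that an installed PyTorch/CUDA combination matches the versions the README asks for. It only uses standard PyTorch and Python attributes; the version strings in the comments are the README's requirements, not guaranteed outputs.

```python
import sys
import torch

print(sys.version)                # README asks for Python 3.6
print(torch.__version__)          # README asks for PyTorch 1.1
print(torch.version.cuda)         # README asks for CUDA 10.0
print(torch.cuda.is_available())  # True if a usable GPU is visible
```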
TAMU-VITA/ABD-Net | ['person re identification'] | ['ABD-Net: Attentive but Diverse Person Re-Identification'] | torchreid/regularizers/param_controller.py torchreid/datasets/veri.py torchreid/models/senet.py torchreid/eval_cylib/setup.py torchreid/utils/loggers.py torchreid/models/__init__.py torchreid/eval_metrics.py torchreid/datasets/__init__.py torchreid/losses/__init__.py torchreid/datasets/prid450s.py torchreid/datasets/cuhk01.py torchreid/regularizers/SO.py torchreid/losses/spectral_loss.py torchreid/datasets/dukemtmcvidreid.py torchreid/losses/incidence_xent_loss.py torchreid/eval_cylib/test_cython.py torchreid/models/resnetmid.py torchreid/data_manager.py torchreid/regularizers/NR.py torchreid/datasets/msmt17.py torchreid/datasets/sensereid.py torchreid/losses/batch_spectral_loss.py torchreid/dataset_loader.py torchreid/losses/incidence_loss.py torchreid/losses/of_penalty.py torchreid/losses/sa_loss.py torchreid/utils/reidtools.py torchreid/models/resnet.py torchreid/datasets/prid2011.py torchreid/models/shufflenet.py torchreid/losses/center_loss.py torchreid/models/resnext.py torchreid/samplers.py torchreid/losses/lowrank_loss.py torchreid/models/pcb.py torchreid/losses/singular_triplet_loss.py torchreid/datasets/viper.py torchreid/datasets/bases.py torchreid/models/xception.py torchreid/losses/hard_mine_triplet_loss.py torchreid/utils/environ.py torchreid/datasets/dukemtmcreid.py torchreid/datasets/ilidsvid.py torchreid/components/attention.py torchreid/__init__.py args.py torchreid/models/mlfn.py torchreid/components/dropout.py torchreid/datasets/mars.py torchreid/models/inceptionresnetv2.py torchreid/models/mobilenetv2.py eval_acc.py torchreid/models/hacnn.py torchreid/regularizers/SVMO.py torchreid/datasets/cuhk03.py torchreid/models/mudeep.py torchreid/models/squeezenet.py torchreid/utils/iotools.py torchreid/models/nasnet.py torchreid/models/inceptionv4.py train.py torchreid/optimizers.py torchreid/transforms.py torchreid/regularizers/__init__.py torchreid/datasets/valset.py torchreid/regularizers/SVDO.py torchreid/losses/cross_entropy_loss.py torchreid/models/densenet.py torchreid/datasets/ilids.py torchreid/utils/avgmeter.py torchreid/datasets/market1501_d.py torchreid/datasets/dukemtmcreid_d.py torchreid/utils/nuc_norm.py torchreid/losses/ring_loss.py torchreid/datasets/market1501.py torchreid/datasets/grid.py torchreid/components/branches.py torchreid/components/shallow_cam.py torchreid/utils/torchtools.py argument_parser image_dataset_kwargs optimizer_kwargs video_dataset_kwargs main extract_train_info accuracy get_criterions main train test get_criterion ImageDataset read_image VideoDataset ImageDataManager VideoDataManager BaseDataManager eval_market1501 evaluate_py eval_cuhk03 evaluate init_optimizer RandomIdentitySampler RandomErasing build_transforms Random2DTranslation build_training_transforms CAM_Module PAM_Module get_attention_module_instance Identity DANetHead NPBranch Sequential MultiBranchNetwork DANBranch ABDBranch GlobalBranch SimpleDropoutOptimizer DropoutOptimizer ShallowCAM BaseDataset BaseImageDataset BaseVideoDataset CUHK01 CUHK03 DukeMTMCreID DukeMTMCreID_D DukeMTMCVidReID GRID iLIDS iLIDSVID Market1501 Market1501_D Mars MSMT17 PRID2011 PRID450S SenseReID ValSet VeRi VIPeR init_imgreid_dataset init_vidreid_dataset numpy_include BatchSpectralLoss CenterLoss CrossEntropyLoss TripletLoss IncidenceLoss IncidenceXentLoss LowRankLoss OFPenalty RingLoss sa_loss SingularTripletLoss SpectralLoss DeepSupervision _copy_dense_layer DenseNet 
densenet121_backbone _make_densenet init_pretrained_weights densenet161_backbone DenseNetDeepBranch _DenseLayer _DenseBlock DummyFD _Transition MultiBranchDenseNet DenseNetCommonBranch HACNN InceptionB ChannelAttn SoftAttn SpatialAttn HarmAttn HardAttn InceptionA ConvBlock Block17 Block8 Mixed_6a Mixed_5b BasicConv2d InceptionResNetV2 inceptionresnetv2 Block35 Mixed_7a Mixed_4a Mixed_5a Reduction_B Inception_B init_pretrained_weights BasicConv2d Inception_A InceptionV4Base Reduction_A Mixed_3a Inception_C inceptionv4 MLFNBlock mlfn MLFN init_pretrained_weights Bottleneck ConvBlock MobileNetV2 Reduction ConvLayers MultiScaleB MultiScaleA MuDeep Fusion ConvBlock NormalCell BranchSeparablesStem AvgPoolPad ReductionCell1 ReductionCell0 MaxPoolPad init_pretrained_weights nasnetamobile SeparableConv2d BranchSeparables CellStem0 FirstCell BranchSeparablesReduction CellStem1 NASNetAMobile pcb_p6 init_pretrained_weights Bottleneck pcb_p4 conv3x3 PCB BasicBlock DimReduceLayer MultiBranchMGNLikeResNet resnet50_backbone ResNet resnet50 init_pretrained_weights ResNetDeepBranch Bottleneck ResNetMGNLikeCommonBranch ResNetCommonBranch resnet50_mgn_like conv3x3 ResNetMGNLikeDeepBranch MultiBranchResNet BasicBlock ResNet resnet50mid init_pretrained_weights Bottleneck conv3x3 BasicBlock ResNeXtBottleneck resnext50_32x4d init_pretrained_weights resnext101_32x4d ResNeXt se_resnext50_32x4d senet154 SENet SEResNetBottleneck SEBottleneck SEResNeXtBottleneck se_resnet50_fc512 init_pretrained_weights Bottleneck se_resnet152 se_resnet50 se_resnext101_32x4d SEModule se_resnet101 ShuffleNet Bottleneck ChannelShuffle SqueezeNet squeezenet1_1 squeezenet1_0_fc512 init_pretrained_weights Fire squeezenet1_0 Block init_pretrained_weights xception SeparableConv2d Xception init_model get_names NoneRegularizer ParamController HtriParamController SORegularizer SVDORegularizer SVMORegularizer ConvRegularizer get_regularizer AverageMeter get_env_or_raise check_isfile read_json save_checkpoint write_json mkdir_if_missing RankLogger Logger binv compute_error msqrt SymNucNorm nuclear_norm _apply_func NucNorm generate_symm_matrix _functional_nuc_norm visualize_ranked_results open_specified_layers set_bn_to_eval count_num_param adjust_learning_rate open_all_layers init_params add_argument ArgumentParser topk isinstance size t eq mul_ expand_as append sum max SingularLoss WrappedCrossEntropyLoss HtriParamController WrappedTripletLoss fix_custom_loss IncidenceXentLoss SpectralLoss IncidenceLoss BatchSpectralLoss label_smooth SingularTripletLoss LowRankLoss init_model extract_train_info MultiStepLR gpu_devices Logger arch save_dir cuda seed regularizer get_regularizer load_state_dict init_optimizer state_dict manual_seed_all update format start_epoch load_weights resume return_dataloaders manual_seed is_available load join print ImageDataManager get_criterions count_num_param parameters num_train_pids use_cpu eval vars TripletLoss CrossEntropyLoss save_checkpoint show_summary visualize_ranked_results fixbase_epoch round str visualize_ranks max_epoch RankLogger source_names range return_testdataset_by_name get test timedelta vars time evaluate target_names write get_criterion open_layers train step OFPenalty model of_penalty zero_grad open_specified_layers regularizer get update format size item vars float enumerate time criterion backward print AverageMeter open_all_layers open_layers step len get format evaluate print AverageMeter expand t addmm_ eval flip_eval avg savemat numpy test_batch_size convert cumsum defaultdict shape append 
sum range format asarray astype choice mean enumerate invert items print float32 argsort int32 zeros len invert format asarray print cumsum astype float32 argsort shape mean int32 append sum range RandomErasing print Random2DTranslation ToTensor set Resize Normalize RandomHorizontalFlip append ColorJitter get print Compose Normalize build_training_transforms lower get_include IncidenceLoss get_env_or_raise CrossEntropyLoss get norm view print size sum SingularLoss WrappedTripletLoss update list format print group load_url match load_state_dict keys compile state_dict init_pretrained_weights DenseNet init_pretrained_weights DenseNet load_url InceptionResNetV2 load_state_dict Linear InceptionV4Base init_pretrained_weights MLFN init_pretrained_weights NASNetAMobile init_pretrained_weights PCB init_pretrained_weights PCB init_pretrained_weights ResNet init_pretrained_weights resnet50_backbone resnet50_backbone ResNet init_pretrained_weights init_pretrained_weights ResNeXt init_pretrained_weights ResNeXt init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet SqueezeNet init_pretrained_weights SqueezeNet init_pretrained_weights SqueezeNet init_pretrained_weights init_pretrained_weights Xception get checker makedirs print format isfile dirname mkdir_if_missing join copy dirname save mkdir_if_missing sqrt _sum rand bmm sqrt div repeat expand_as range stack bmm msqrt size expand permute __func view join format basename _cp_img_to print argsort shape range mkdir_if_missing isinstance Conv2d bias normal_ kaiming_normal_ modules BatchNorm1d BatchNorm2d weight constant_ Linear param_groups eval __name__ parameters train named_children isinstance print parameters eval DataParallel train module DataParallel sum module isinstance | # ABD-Net: Attentive but Diverse Person Re-Identification [](https://paperswithcode.com/sota/person-re-identification-on-msmt17?p=abd-net-attentive-but-diverse-person-re) [](https://paperswithcode.com/sota/person-re-identification-on-dukemtmc-reid?p=abd-net-attentive-but-diverse-person-re) [](https://paperswithcode.com/sota/person-re-identification-on-market-1501?p=abd-net-attentive-but-diverse-person-re) Code for this paper [ABD-Net: Attentive but Diverse Person Re-Identification](https://arxiv.org/abs/1908.01114) Tianlong Chen, Shaojin Ding\*, Jingyi Xie\*, Ye Yuan, Wuyang Chen, Yang Yang, Zhou Ren, Zhangyang Wang In ICCV 2019 Refer to **Training Guides README** [here](./README_Training_and_Testing_Guides.md), original README [here](./README_ORIG.md), datasets README [here](./DATASETS.md), Model ZOO README [here](https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.html). We provide complete usage pretrained models for our paper. - Market1501 [best ABD-Net model](https://github.com/hsfzxjy/models.storage/releases/download/ABD-Net/market_checkpoint_best.pth.tar) - Duke [best ABD-Net model](https://github.com/hsfzxjy/models.storage/releases/download/ABD-Net/duke_checkpoint_best.pth.tar) - MSMT17 [best ABD-Net model](https://github.com/hsfzxjy/models.storage/releases/download/ABD-Net/msmt17_final_best.pth.tar) | 1,027 |
TAMU-VITA/FAT | ['person re identification'] | ['In Defense of the Triplet Loss Again: Learning Robust Person Re-Identification with Fast Approximated Triplet Loss and Label Distillation'] | argument.py pridUtils/utils.py training/functionsFAT.py datasetUtils/dataset.py datasetUtils/infoGenerator.py pridUtils/random_erasing.py script/evaluate.py models/models.py script/train.py pridUtils/re_ranking.py training/config.py training/configFAT.py datasetUtils/testDataloaders.py training/functions.py datasetUtils/datasetStat.py models/block.py script/featureExtract.py datasetUtils/trainDataloaders.py parse_args dataset_collate PRIDdataset savecsv info4test relabel_dict Market1501_info DukeMTMC_info MSMT17_info info4train_noVal info4train getDataloader RectScale getDataloader RectScale getTransforms ClassBlock_Conv ClassBlock ClassBlock_Linear weights_init_classifier weights_init_kaiming ft_densenet161 ft_resnet50 ft_densenet169 ft_densenet201 ft_resnet101 ft_resnet50mid ft_densenet121 PCB ft_resnet152 RandomErasing k_reciprocal_neigh re_ranking save_model load_model clean_file clean_dir save_parallel_model get_scores compute_mAP evaluate get_features extract fliplr train_model train_epoch train_epoch_fat train_model_fat getModel getCtrdHM getCentroids getLoss getCtrdAvgNeg getNegSetAvg getNegSetBatch getNegSetAll getGallerySet getPosSet getNegSetHM join log_dir add_argument ArgumentParser set_defaults max resume_checkpoint split array LongTensor print DataFrame to_csv clean_file iterrows savecsv append read_csv makedirs list remove print sort set int savecsv relabel_dict add set append listdir int savecsv replace relabel_dict append listdir savecsv relabel_dict info4train info4test makedirs info4train info4test makedirs PRIDdataset getTransforms data normal constant __name__ kaiming_normal data normal constant __name__ zeros_like around max exp transpose append sum range concatenate astype mean unique minimum int float32 argpartition k_reciprocal_neigh zeros len rmtree exists makedirs remove isfile print load load_state_dict print cuda save state_dict print cuda save state_dict flatten argwhere in1d zero_ range len setdiff1d argsort intersect1d argwhere append get_details print len re_ranking view evaluate print transpose write range tqdm dot numpy zero_ float mm rerank len index_select long list norm FloatTensor size tqdm div zero_ expand_as cpu cat enumerate data format model backward add_scalar print step zero_grad len tqdm cuda dataset sum max cross_entropy enumerate join save_parallel_model train_epoch train step range data getLoss model zero_grad div dataset max getNegSetHM cuda mm getNegSetBatch expand_as sum format getNegSetAvg enumerate backward add_scalar print tqdm getNegSetAll getPosSet step len getCtrdHM getCentroids join getCtrdAvgNeg save_parallel_model train_epoch_fat getGallerySet train step range ft_densenet161 ft_resnet50 ft_densenet169 ft_densenet201 ft_resnet101 ft_resnet50mid ft_densenet121 ft_resnet152 data print tqdm div eval expand_as cuda enumerate t stack len any len range zero_ len Variable stack min len mean pairwise_distance cross_entropy | # In Defense of the Triplet Loss Again: Learning Robust Person Re-Identification with Fast Approximated Triplet Loss and Label Distillation <a href="https://arxiv.org/pdf/1912.07863v1.pdf">In Defense of the Triplet Loss Again: Learning Robust Person Re-Identification with Fast Approximated Triplet Loss and Label Distillation</a> Ye Yuan, Wuyang Chen, Yang Yang, Zhangyang Wang ## Overview The comparative losses (typically, triplet 
loss) are appealing choices for learning person re-identification (ReID) features. However, the triplet loss is computationally much more expensive than the (practically more popular) classification loss, limiting its wider usage in massive datasets. Moreover, the abundance of label noise and outliers in ReID datasets may also put the margin-based loss in jeopardy. This work addresses the above two shortcomings of the triplet loss, extending its effectiveness to large-scale ReID datasets with potentially noisy labels. We propose a fast approximated triplet (FAT) loss, which provably converts the point-wise triplet loss into its upper-bound form, consisting of a point-to-set loss term plus cluster compactness regularization. It preserves the effectiveness of the triplet loss, while leading to linear complexity in the training set size. A label distillation strategy is further designed to learn refined soft labels in place of the potentially noisy labels, from only an identified subset of confident examples, through teacher-student networks. We conduct extensive experiments on the three most popular ReID benchmarks (Market1501, DukeMTMC-reID, and MSMT17), and demonstrate that the FAT loss with distilled labels leads to ReID features with remarkable accuracy, efficiency, robustness, and direct transferability to unseen datasets. | 1,028 |
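To illustrate the idea described in the FAT overview above — replacing point-wise triplets with a point-to-set term plus a cluster-compactness regularizer — here is a minimal PyTorch sketch. It is an illustrative approximation written for this summary, not the repository's actual FAT implementation; the function name, margin, and regularization weight are assumptions.

```python
import torch
import torch.nn.functional as F

def point_to_set_loss(embeddings, labels, margin=0.3, reg_weight=0.1):
    """Point-to-set triplet-style loss with cluster compactness (sketch only)."""
    classes = labels.unique()
    # Class centroids act as the "set" representatives, so each sample is
    # compared against C centroids instead of O(N^2) point pairs.
    centroids = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])

    dists = torch.cdist(embeddings, centroids)         # (N, C) distances
    own = labels.unsqueeze(1) == classes.unsqueeze(0)  # mask of each sample's own class

    d_pos = dists[own]                                               # distance to own centroid
    d_neg = dists.masked_fill(own, float("inf")).min(dim=1).values   # nearest other centroid

    hinge = F.relu(d_pos - d_neg + margin).mean()  # point-to-set margin term
    compactness = d_pos.pow(2).mean()              # cluster compactness regularization
    return hinge + reg_weight * compactness

# Toy usage: 8 embeddings of dimension 16 from 2 identities
emb = torch.randn(8, 16)
ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(point_to_set_loss(emb, ids))
```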
TDAmeritrade/stumpy | ['time series'] | ['Efficient Matrix Profile Computation Using Different Distance Functions'] | stumpy/config.py stumpy/maamp.py tests/test_aamp.py stumpy/stamp.py tests/test_gpu_aampdist.py tests/test_gpu_aamp.py stumpy/gpu_stump.py tests/test_floss.py tests/test_mpdist.py tests/test_maamp.py stumpy/mstumped.py stumpy/gpu_aamp_ostinato.py stumpy/gpu_mpdist.py tests/test_ostinato.py stumpy/aamp_ostinato.py tests/test_stimp.py docs/conf.py tests/test_stumped.py tests/test_stumpi.py tests/test_gpu_ostinato.py tests/test_gpu_stimp.py tests/test_motifs.py stumpy/mstump.py stumpy/__init__.py tests/test_scrump.py tests/test_aampdist.py stumpy/stimp.py tests/test_aamp_ostinato.py tests/test_maamped.py stumpy/maamped.py stumpy/aamp.py tests/test_gpu_aamp_ostinato.py stumpy/chains.py tests/test_scraamp.py stumpy/aampi.py stumpy/scraamp.py stumpy/mpdist.py tests/test_aampi.py tests/test_chains.py tests/test_stomp.py stumpy/ostinato.py tests/test_aamp_motifs.py conftest.py stumpy/gpu_ostinato.py stumpy/gpu_aamp.py tests/test_gpu_mpdist.py tests/test_aampdist_snippets.py stumpy/aampdist_snippets.py stumpy/stumped.py tests/test_stump.py stumpy/stumpi.py stumpy/stump.py stumpy/motifs.py stumpy/scrump.py tests/test_snippets.py stumpy/snippets.py tests/test_aamped.py stumpy/aamped.py tests/test_mstumped.py tests/test_mstump.py stumpy/stomp.py tests/test_core.py tests/test_config.py tests/naive.py docs/images/performance.py setup.py stumpy/aampdist.py stumpy/gpu_aampdist.py stumpy/core.py stumpy/floss.py tests/test_non_normalized_decorator.py tests/test_stamp.py tests/test_gpu_stump.py stumpy/aamp_motifs.py stumpy/gpu_stimp.py get_extras_require readme seconds_to_time perf_to_time _compute_diagonal _aamp aamp aampdisted aampdist _aampdist_vect aampdist_snippets _get_all_aampdist_profiles aamped aampi aamp_match _aamp_motifs aamp_motifs _get_aamp_central_motif _aamp_ostinato _aamp_across_series_nearest_neighbors aamp_ostinato aamp_ostinatoed allc atsc z_norm _welford_nanvar _idx_to_mp sliding_dot_product _compare_parameters _get_array_ranges _get_mask_slices rolling_nanmin are_arrays_equal _jagged_list_to_array _calculate_squared_distance check_nan _calculate_squared_distance_profile compute_mean_std rolling_nanmax _rolling_isfinite_1d _gpu_ostinato_driver_not_found _gpu_stump_driver_not_found replace_distance get_pkg_name mass_absolute rolling_isfinite _mass_absolute apply_exclusion_zone driver_not_found _gpu_aampdist_driver_not_found _mass_absolute_distance_matrix _get_partial_mp_func welford_nanstd non_normalized preprocess rolling_nanstd _rolling_nanmax_1d _gpu_mpdist_driver_not_found array_to_temp_file calculate_distance_profile _gpu_aamp_ostinato_driver_not_found _count_diagonal_ndist _mass_distance_matrix _get_QT check_dtype _rolling_nanmin_1d get_max_window_size _gpu_stimp_driver_not_found mass are_distances_too_small mueen_calculate_distance_profile _gpu_aamp_driver_not_found rolling_window preprocess_non_normalized check_window_size transpose_dataframe preprocess_diagonal _mass welford_nanvar _cac fluss _rea _iac _nnmark floss gpu_aamp _compute_and_update_PI_kernel _gpu_aamp gpu_aampdist gpu_aamp_ostinato gpu_mpdist gpu_ostinato gpu_stimp _gpu_stump _compute_and_update_PI_kernel gpu_stump maamp_subspace _get_first_maamp_profile _compute_multi_D _maamp_multi_distance_profile maamp _multi_mass_absolute maamp_multi_distance_profile _maamp maamped match motifs _motifs _compute_P_ABBA mpdist _mpdist_vect _select_P_ABBA_value _mpdist mpdisted _multi_distance_profile subspace 
_get_first_mstump_profile _compute_multi_D mstump _inverse_norm _multi_mass multi_distance_profile _compute_PI _subspace _get_multi_QT _apply_include _preprocess_include _mstump _discretize mstumped _ostinato ostinatoed ostinato _get_central_motif _across_series_nearest_neighbors _prescraamp prescraamp scraamp scrump prescrump _prescrump _get_all_profiles snippets _mass_PI stamp _stimp _bfs_indices stimped _binarize_pan stimp _contrast_pan _normalize_pan _stomp _compute_diagonal stump _stump stumped stumpi test_gpu_mpdist aampi_egress z_norm subspace aamp_across_series_nearest_neighbors get_array_ranges multi_distance_profile ostinato aampdist_snippets distance consensus_search stumpi_egress stamp aampdist_vect scrump prescrump aamp_distance_matrix maamp_subspace get_aamp_central_motif multi_mass mpdist_vect maamp mpdist aamp_consensus_search stump aampdist distance_profile apply_exclusion_zone multi_mass_absolute across_series_nearest_neighbors contrast_pan maamp_multi_distance_profile aamp_ostinato aamp_distance_profile aamp get_central_motif replace_inf distance_matrix transform_pan mstump PI mass apply_include normalize_pan mpdist_snippets get_all_mpdist_profiles _get_mask_slices binarize_pan test_aamp_self_join test_aamp_constant_subsequence_self_join test_aamp_A_B_join test_aamp_identical_subsequence_self_join test_aamp_two_constant_subsequences_A_B_join test_aamp_nan_zero_mean_self_join test_aamp_nan_inf_A_B_join test_aamp_identical_subsequence_A_B_join test_aamp_nan_inf_self_join test_aamp_int_input test_aamp_one_constant_subsequence_A_B_join test_aampdist_k test_aampdist_vect_k test_aampdist_percentage dask_cluster test_aampdisted test_aampdist test_aampdist_vect_percentage test_aampdist_vect test_aampdist_snippets test_mpdist_snippets_percentage test_mpdist_snippets_s test_two_constant_subsequences_A_B_join_swap test_aamped_self_join_df test_aamped_one_subsequence_inf_self_join test_aamped_one_subsequence_nan_self_join test_aamped_one_constant_subsequence_A_B_join test_two_constant_subsequences_A_B_join test_aamped_self_join dask_cluster test_aamped_identical_subsequence_A_B_join test_aamped_two_subsequences_nan_A_B_join test_aamped_two_subsequences_nan_inf_A_B_join_swap test_aamped_self_join_larger_window test_aamped_two_subsequences_nan_inf_A_B_join test_aamp_int_input test_aamped_one_constant_subsequence_self_join test_aamped_A_B_join test_aamped_A_B_join_df test_aamped_one_constant_subsequence_self_join_df test_aamped_one_constant_subsequence_A_B_join_df_swap test_aamped_one_constant_subsequence_A_B_join_swap test_aamped_one_subsequence_inf_A_B_join test_aamped_self_join_larger_window_df test_aamped_one_constant_subsequence_A_B_join_df test_aamped_two_subsequences_inf_A_B_join test_two_constant_subsequences_A_B_join_df test_aamped_nan_zero_mean_self_join test_constant_subsequence_A_B_join_df_swap test_aamped_one_subsequence_nan_A_B_join test_aamped_identical_subsequence_self_join test_aampi_stream_nan_inf_self_join test_aampi_int_input test_aampi_self_join_egress test_aampi_init_nan_inf_self_join test_aampi_self_join test_aampi_constant_subsequence_self_join test_aampi_init_nan_inf_self_join_egress test_aampi_constant_subsequence_self_join_egress test_aampi_identical_subsequence_self_join_egress test_aampi_profile_index_match test_aampi_identical_subsequence_self_join test_aampi_stream_nan_inf_self_join_egress test_aamp_naive_match_exact test_aamp_naive_match_exclusion_zone test_aamp_motifs_two_motifs test_aamp_match test_aamp_motifs_one_motif naive_aamp_match 
test_deterministic_ostinato dask_cluster test_deterministic_ostinatoed test_random_ostinatoed test_random_ostinato test_allc test_atsc test_change_excl_zone_denom test_compute_mean_std_multidimensional test_preprocess_diagonal test_get_array_ranges test_welford_nanstd test_compute_mean_std_chunked test_preprocess_non_normalized test_idx_to_mp test_compute_mean_std_multidimensional_chunked test_mueen_calculate_distance_profile test_count_diagonal_ndist test_mass_absolute_Q_inf test_compute_mean_std_multidimensional_chunked_many test_get_array_ranges_exhausted_truncated test_mass_absolute_T_inf test_mass_absolute_distance_matrix test_sliding_dot_product test_rolling_isfinite test_mass_absolute_sqrt_input_negative test_get_max_window_size test_check_dtype_float64 test_apply_exclusion_zone_multidimensional naive_rolling_window_dot_product test_compute_mean_std_chunked_many test_get_mask_slices test_compute_mean_std test_calculate_distance_profile test_check_bad_dtype test_mass_Q_nan test_welford_nanvar test_mass_absolute_Q_nan test_mass_distance_matrix naive_idx_to_mp test_rolling_nanmax_1d test_apply_exclusion_zone test_array_to_temp_file test_welford_nanvar_nan test_replace_distance test_calculate_squared_distance_profile test_mass_T_nan test_welford_nanvar_catastrophic_cancellation test_get_array_ranges_exhausted test_jagged_list_to_array test_preprocess test_rolling_nanmin test_get_array_ranges_empty_array test_rolling_nanmin_1d test_compare_parameters naive_compute_mean_std_multidimensional test_jagged_list_to_array_empty test_check_max_window_size test_rolling_nanmax test_mass_Q_inf naive_compute_mean_std test_mass_absolute_T_nan test_mass test_mass_T_inf test_mass_asbolute test_check_window_size test_cac_custom_iac naive_nnmark test_aamp_floss naive_rea naive_cac test_fluss test_aamp_floss_inf_nan naive_right_mp test_nnmark test_cac test_floss_inf_nan test_floss naive_iac test_rea test_gpu_aamp_self_join test_parallel_gpu_aamp_self_join test_gpu_aamp_nan_inf_A_B_join test_gpu_aamp_two_constant_subsequences_A_B_join test_gpu_aamp_nan_inf_self_join test_gpu_aamp_nan_zero_mean_self_join test_gpu_aamp_A_B_join test_gpu_aamp_one_constant_subsequence_A_B_join test_gpu_aamp_identical_subsequence_self_join test_gpu_aamp_constant_subsequence_self_join test_gpu_aamp_int_input test_gpu_aamp_identical_subsequence_A_B_join test_gpu_aamp_self_join_larger_window test_parallel_gpu_aamp_A_B_join test_gpu_aampdist test_random_gpu_aamp_ostinato test_deterministic_gpu_aamp_ostinato test_deterministic_gpu_ostinato test_random_gpu_ostinato test_gpu_stimp test_gpu_stump_nan_zero_mean_self_join test_gpu_stump_A_B_join test_gpu_stump_self_join_larger_window test_gpu_stump_self_join test_gpu_stump_int_input test_parallel_gpu_stump_self_join test_gpu_stump_constant_subsequence_self_join test_gpu_stump_one_constant_subsequence_A_B_join test_gpu_stump_nan_inf_self_join test_gpu_stump_two_constant_subsequences_A_B_join test_gpu_stump_identical_subsequence_A_B_join test_gpu_stump_identical_subsequence_self_join test_parallel_gpu_stump_A_B_join test_gpu_stump_nan_inf_A_B_join test_maamp_subspace_include_discords test_maamp test_naive_maamp test_maamp_subspace_discords test_maamp_subspace test_maamp_discords test_maamp_include test_multi_mass_absolute_seeded test_maamp_wrapper_include test_maamp_nan_inf_self_join_first_dimension test_get_first_maamp_profile test_maamp_include_discords test_maamp_nan_self_join_all_dimensions test_maamp_int_input test_maamp_subspace_include test_maamp_multi_distance_profile 
test_constant_subsequence_self_join test_maamp_wrapper test_identical_subsequence_self_join test_multi_mass_absolute dask_cluster test_maamped_one_subsequence_inf_self_join_all_dimensions test_maamped_include test_maamped_one_subsequence_nan_self_join_first_dimension test_maamped test_maamped_int_input test_maamped_df test_maamped_constant_subsequence_self_join test_maamped_identical_subsequence_self_join test_maamped_one_subsequence_inf_self_join_first_dimension test_maamped_discords test_maamped_include_discords test_maamped_one_subsequence_nan_self_join_all_dimensions test_match naive_match test_motifs_two_motifs test_motifs_one_motif test_motifs_max_matches test_naive_match_exclusion_zone test_select_P_ABBA_val_inf test_mpdist_custom_func dask_cluster test_mpdist test_mpdist_vect some_func test_mpdist_vect_k test_mpdist_percentage test_compute_P_ABBA test_mpdisted test_mpdist_k test_mpdist_vect_percentage test_mstump_nan_inf_self_join_first_dimension test_apply_include test_mstump_include_discords test_mstump_wrapper_include test_mstump_include test_mstump_nan_self_join_all_dimensions naive_rolling_window_dot_product test_subspace_discords test_subspace test_mstump_discords test_mstump_int_input test_get_multi_QT test_multi_mass_seeded test_naive_mstump test_mstump test_subspace_include_discords test_get_first_mstump_profile test_multi_distance_profile test_subspace_include test_constant_subsequence_self_join test_multi_mass test_mstump_wrapper test_identical_subsequence_self_join test_mstumped_include test_mstumped_one_subsequence_nan_self_join_first_dimension test_mstumped_int_input dask_cluster test_mstumped test_mstumped_df test_mstumped_include_discords test_mstumped_identical_subsequence_self_join test_mstumped_constant_subsequence_self_join test_mstumped_discords test_mstumped_one_subsequence_inf_self_join_all_dimensions test_mstumped_one_subsequence_inf_self_join_first_dimension test_mstumped_one_subsequence_nan_self_join_all_dimensions test_match test_gpu_stump test_snippets test_prescrump dask_cluster test_subspace test_scrump_plus_plus test_ostinato test_scrump_plus_plus_full test_scrump test_mstumped test_stump test_stumpi test_gpu_ostinato test_mstump test_stumped test_motifs test_multi_distance_profile test_mpdist test_mpdisted test_mass test_ostinatoed test_gpu_mpdist test_deterministic_ostinato dask_cluster test_deterministic_ostinatoed test_random_ostinatoed test_random_ostinato test_scraamp_plus_plus_A_B_join test_scraamp_A_B_join test_scraamp_plus_plus_A_B_join_full test_prescraamp_A_B_join_swap test_scraamp_A_B_join_full test_prescraamp_self_join test_scraamp_nan_inf_self_join test_scraamp_plus_plus_self_join test_scraamp_self_join test_scraamp_self_join_full_larger_window test_scraamp_self_join_larger_window test_scraamp_A_B_join_swap test_scraamp_A_B_join_full_swap test_scraamp_int_input test_scraamp_plus_plus_A_B_join_full_swap test_scraamp_constant_subsequence_self_join test_scraamp_identical_subsequence_self_join test_scraamp_nan_zero_mean_self_join test_prescraamp_self_join_larger_window naive_prescraamp test_prescraamp_A_B_join naive_scraamp test_scraamp_plus_plus_self_join_full test_scraamp_self_join_full test_scrump_plus_plus_A_B_join_full_swap test_scrump_plus_plus_self_join_full test_prescrump_A_B_join_swap test_scrump_int_input test_scrump_A_B_join_swap test_scrump_constant_subsequence_self_join test_scrump_self_join_larger_window test_scrump_self_join test_prescrump_A_B_join test_scrump_nan_inf_self_join test_scrump_nan_zero_mean_self_join 
test_scrump_A_B_join_full test_prescrump_self_join_larger_window test_scrump_self_join_full test_scrump_A_B_join test_scrump_identical_subsequence_self_join test_scrump_self_join_full_larger_window test_scrump_A_B_join_full_swap test_scrump_plus_plus_A_B_join_full test_prescrump_self_join test_scrump_plus_plus_A_B_join test_scrump_plus_plus_self_join test_mpdist_snippets test_mpdist_snippets_s test_mpdist_snippets_percentage test_get_all_mpdist_profiles_percentage test_get_all_mpdist_profiles_s test_get_all_mpdist_profiles test_stamp_nan_zero_mean_self_join test_stamp_mass_PI test_stamp_self_join test_stamp_nan_inf_A_B_join test_stamp_int_input test_stamp_nan_inf_self_join test_stamp_A_B_join test_bsf_indices dask_cluster test_stimp_1_percent test_stimp_100_percent naive_bsf_indices test_stimp_max_m test_stimped split test_stomp_A_B_join test_stomp_self_join test_stomp_nan_zero_mean_self_join test_stump_self_join_larger_window test_stomp_nan_inf_self_join test_stomp_nan_inf_A_B_join test_stomp_int_input test_stump_two_constant_subsequences_A_B_join test_stump_nan_inf_self_join test_stump_nan_zero_mean_self_join test_stump_one_constant_subsequence_A_B_join test_stump_constant_subsequence_self_join test_stump_self_join test_stump_A_B_join test_stump_nan_inf_A_B_join test_stump_identical_subsequence_A_B_join test_stump_identical_subsequence_self_join test_stump_int_input test_two_constant_subsequences_A_B_join_swap test_stumped_identical_subsequence_A_B_join test_stumped_two_subsequences_inf_A_B_join test_stumped_one_subsequence_nan_self_join test_stumped_int_input test_two_constant_subsequences_A_B_join test_stumped_one_constant_subsequence_self_join test_stumped_A_B_join_df test_stumped_self_join test_stumped_one_constant_subsequence_A_B_join_swap dask_cluster test_stumped_one_constant_subsequence_A_B_join test_stumped_one_subsequence_inf_self_join test_stumped_nan_zero_mean_self_join test_stumped_A_B_join test_stump_self_join_larger_window_df test_stumped_two_subsequences_nan_A_B_join test_two_constant_subsequences_A_B_join_df test_stumped_one_constant_subsequence_self_join_df test_stumped_self_join_df test_stumped_one_constant_subsequence_A_B_join_df_swap test_stumped_one_subsequence_nan_A_B_join test_stumped_identical_subsequence_self_join test_stump_self_join_larger_window test_stumped_one_constant_subsequence_A_B_join_df test_stumped_two_subsequences_nan_inf_A_B_join test_constant_subsequence_A_B_join_df_swap test_stumped_two_subsequences_nan_inf_A_B_join_swap test_stumped_one_subsequence_inf_A_B_join test_stumpi_stream_nan_inf_self_join_egress test_stumpi_identical_subsequence_self_join_egress test_stumpi_identical_subsequence_self_join test_stumpi_profile_index_match test_stumpi_init_nan_inf_self_join test_stumpi_constant_subsequence_self_join_egress test_stumpi_int_input test_stumpi_constant_subsequence_self_join test_stumpi_init_nan_inf_self_join_egress test_stumpi_stream_nan_inf_self_join test_stumpi_self_join_egress test_stumpi_self_join append int modf timedelta match ceil float round join print strip match seconds_to_time split abs norm min range _count_diagonal_ndist inf prange NUMBA_NUM_THREADS _get_array_ranges _compute_diagonal full range int arange STUMPY_EXCL_ZONE_DENOM preprocess_non_normalized copy warning check_window_size _aamp ceil empty int min range pad _aampdist_vect ceil empty clip arange float64 vstack argmin pad append sum range inf astype _get_all_aampdist_profiles full empty minimum min repeat check_window_size _get_mask_slices len arange 
STUMPY_EXCL_ZONE_DENOM _get_array_ranges where warning gather list scatter ceil append range copy empty keys enumerate submit int _count_diagonal_ndist cancel preprocess_non_normalized are_distances_too_small check_window_size len apply_exclusion_zone int argmin _jagged_list_to_array append aamp_match nanmax int inf _aamp_motifs float64 STUMPY_EXCL_ZONE_DENOM astype preprocess_non_normalized warn rolling_window nan ceil sum int apply_exclusion_zone inf STUMPY_EXCL_ZONE_DENOM argmin preprocess_non_normalized max_distance shape rolling_window ceil sum append _mass_absolute inf argmin sliding_dot_product empty any zeros sum range len _aamp_across_series_nearest_neighbors mean zip isclose flatnonzero _mass_absolute inf _get_partial_mp_func sliding_dot_product argsort empty any partial_mp_func sum max range len _get_aamp_central_motif len preprocess_non_normalized _aamp_ostinato rolling_window sum enumerate _get_aamp_central_motif len preprocess_non_normalized _aamp_ostinato rolling_window sum enumerate list size deque array range append update ones size atsc set deque append argmax range list keys warning remove _raise_driver_not_found driver_not_found driver_not_found driver_not_found driver_not_found driver_not_found driver_not_found driver_not_found asarray strides std isnan any int STUMPY_EXCL_ZONE_DENOM floor flipud convolve nanmean nanvar empty range rolling_isfinite ndim ceil int ceil int ndim ndim STUMPY_MEAN_STD_MAX_ITER inf min hstack mean rolling_window STUMPY_MEAN_STD_NUM_CHUNKS rolling_nanstd ceil range append STUMPY_DENOM_THRESHOLD abs inf prange empty _calculate_squared_distance _calculate_squared_distance_profile sum asarray inf check_dtype preprocess_non_normalized copy sliding_dot_product flatten rolling_window any empty _mass_absolute mass_absolute shape range cumsum abs square sliding_dot_product mean sqrt empty std len asarray inf compute_mean_std check_dtype copy sliding_dot_product flatten preprocess any empty _mass shape preprocess range mass sliding_dot_product min max inf asarray compute_mean_std check_dtype copy check_window_size transpose_dataframe nan asarray check_dtype copy check_window_size transpose_dataframe rolling_isfinite preprocess_non_normalized compute_mean_std NamedTemporaryFile name save zeros prange min cumsum argmin searchsorted any zeros sum diff rolling_window any isfinite copy ndim partial full max enumerate where norm z_norm all inf astype isfinite copy int64 rolling_window minimum arange astype flatten int64 bincount seed rvs arange rv_histogram reshape _nnmark pdf mean round randint empty range lstsq fit _iac zeros _nnmark deepcopy argmin min empty max range _cac _rea inf grid min gridsize max range ceil load STUMPY_THREADS_PER_BLOCK STUMPY_EXCL_ZONE_DENOM where set_start_method warning Pool apply_async ceil append sum range get close array_to_temp_file empty enumerate load int join remove isinstance _get_QT min preprocess_non_normalized are_distances_too_small rolling_window check_window_size _gpu_aamp len _get_aamp_central_motif len preprocess_non_normalized _aamp_ostinato rolling_window sum enumerate _mpdist _ostinato len _get_central_motif preprocess enumerate STUMPY_DENOM_THRESHOLD abs ceil load STUMPY_THREADS_PER_BLOCK STUMPY_EXCL_ZONE_DENOM where set_start_method _gpu_stump warning Pool apply_async ceil append range get close preprocess array_to_temp_file empty enumerate load int join remove isinstance _get_QT min are_distances_too_small check_window_size len inf shape any mass_absolute empty range _subspace norm 
preprocess_non_normalized apply_exclusion_zone sort _multi_mass_absolute shape _apply_include zeros range int STUMPY_EXCL_ZONE_DENOM _maamp_multi_distance_profile preprocess_non_normalized check_window_size _preprocess_include ceil inf argmin _maamp_multi_distance_profile isinf full range apply_exclusion_zone prange inf range ones _compute_multi_D sort _compute_PI copy _apply_include empty range int _get_first_maamp_profile STUMPY_EXCL_ZONE_DENOM preprocess_non_normalized _get_multi_QT shape rolling_window empty _maamp check_window_size _preprocess_include ceil sum _get_first_maamp_profile STUMPY_EXCL_ZONE_DENOM _preprocess_include gather list shape scatter append ceil sum range _get_multi_QT empty keys enumerate submit int cancel min preprocess_non_normalized rolling_window check_window_size len apply_exclusion_zone int argmin match _jagged_list_to_array append nanmax int inf _motifs float64 STUMPY_EXCL_ZONE_DENOM astype warn preprocess nan ceil int apply_exclusion_zone asarray inf STUMPY_EXCL_ZONE_DENOM argmin max_distance shape preprocess ceil sum array append _get_partial_mp_func count_nonzero int partition sort min isfinite custom_func max int min _compute_P_ABBA _select_P_ABBA_value ceil empty clip int nanmin inf min range distance_matrix_func _select_P_ABBA_value rolling_nanmin ceil empty full clip _mpdist _mpdist asarray warning unique ones copy _preprocess_include inf mass shape isinf empty range argsort in1d _preprocess_include norm z_norm _inverse_norm _subspace preprocess log2 _discretize apply_exclusion_zone sort _multi_mass shape _apply_include zeros range int _multi_distance_profile STUMPY_EXCL_ZONE_DENOM preprocess check_window_size _preprocess_include ceil _multi_distance_profile inf argmin isinf full range empty range sliding_dot_product _calculate_squared_distance_profile sqrt argmin range isinf ones _compute_multi_D sort _compute_PI copy _apply_include empty range int _get_first_mstump_profile STUMPY_EXCL_ZONE_DENOM _get_multi_QT shape preprocess check_window_size _preprocess_include _mstump ceil empty _get_first_mstump_profile STUMPY_EXCL_ZONE_DENOM _preprocess_include gather list shape scatter append ceil range _get_multi_QT preprocess empty keys enumerate submit int cancel min check_window_size len argmin sliding_dot_product zeros _mass range len mean zip _across_series_nearest_neighbors isclose flatnonzero inf _get_partial_mp_func sliding_dot_product argsort partial_mp_func max range len _ostinato len _get_central_motif preprocess enumerate _ostinato len _get_central_motif preprocess enumerate inf min argmin square max range _mass_absolute STUMPY_EXCL_ZONE_DENOM sliding_dot_product _prescraamp int64 ceil sum asarray inf astype copy sqrt nan empty int check_dtype preprocess_non_normalized rolling_window check_window_size full inf min argmin square _mass _calculate_squared_distance max range int asarray inf compute_mean_std _prescrump check_dtype STUMPY_EXCL_ZONE_DENOM astype copy sliding_dot_product sqrt int64 nan check_window_size ceil empty full int min range pad _mpdist_vect ceil empty clip arange float64 vstack argmin pad _get_all_profiles append sum range inf astype full empty minimum min repeat check_window_size _get_mask_slices len apply_exclusion_zone inf size argmin mass int copy rolling_window preprocess check_window_size nan ceil array count_nonzero arange cumsum astype int64 power empty sqrt exp reshape astype argsort shape int64 nan linspace empty full where int apply_exclusion_zone inf _calculate_squared_distance_profile _mass_PI STUMPY_EXCL_ZONE_DENOM 
argmin sliding_dot_product are_distances_too_small sqrt preprocess warning check_window_size item isinf ceil empty range dot _count_diagonal_ndist abs prange NUMBA_NUM_THREADS _get_array_ranges sqrt _compute_diagonal empty full range int arange STUMPY_EXCL_ZONE_DENOM preprocess_diagonal _stump are_distances_too_small warning check_window_size ceil empty arange STUMPY_EXCL_ZONE_DENOM _get_array_ranges where warning gather list scatter ceil append range empty keys enumerate submit int _count_diagonal_ndist cancel preprocess_diagonal are_distances_too_small check_window_size len mpdist assert_almost_equal gpu_mpdist range norm z_norm copy rolling_window any nan isinf norm copy rolling_window any nan isinf array cdist rolling_window nan apply_exclusion_zone min argmin distance_profile max range inf min copy full array range asarray inf cdist min rolling_window nan full range inf copy shape any nan distance_profile isinf empty range inf copy shape any nan aamp_distance_profile isinf empty range inf ones shape full range ones copy append array range apply_exclusion_zone multi_mass sort shape apply_include zeros range inf ones multi_distance_profile PI copy full range apply_exclusion_zone multi_mass_absolute sort shape apply_include zeros range inf ones PI copy maamp_multi_distance_profile full range ones sort argsort distance flatten append range ones sort argsort distance flatten append range zeros int64 min range argmin distance_profile zeros range len across_series_nearest_neighbors mean zip isclose flatnonzero inf argmin maximum stump zeros range len get_central_motif consensus_search argmin aamp_distance_profile zeros range len aamp_across_series_nearest_neighbors mean zip isclose flatnonzero inf argmin maximum aamp zeros range len aamp_consensus_search get_aamp_central_motif int sort min ceil empty max range int sort min ceil empty max range int sum sort min ceil empty max int sum sort min ceil empty max int min mpdist_vect_func pad ceil empty max range minimum sum inf arange min astype pad empty append get_all_mpdist_profiles _get_mask_slices range get_all_mpdist_profiles apply_exclusion_zone permutation argmin min distance_matrix max range int _count_diagonal_ndist inf astype _get_array_ranges int64 ceil distance_matrix full range sqrt range exp reshape argsort shape nan linspace empty full range range nanmax sort contrast_pan repeat nan normalize_pan append diff clip binarize_pan append len diff enumerate replace_inf assert_almost_equal Series aamp replace_inf assert_almost_equal Series aamp concatenate Series aamp replace_inf assert_almost_equal concatenate Series rand aamp replace_inf assert_almost_equal concatenate Series aamp replace_inf assert_almost_equal Series rand aamp replace_inf assert_almost_equal Series STUMPY_TEST_PRECISION rand aamp replace_inf assert_almost_equal Series copy aamp replace_inf assert_almost_equal Series copy aamp replace_inf assert_almost_equal replace_inf assert_almost_equal array aamp close LocalCluster assert_almost_equal aampdist_vect _aampdist_vect assert_almost_equal aampdist_vect _aampdist_vect assert_almost_equal aampdist_vect _aampdist_vect assert_almost_equal aampdist assert_almost_equal aampdist assert_almost_equal aampdist assert_almost_equal aampdist_snippets assert_almost_equal aampdist_snippets assert_almost_equal aampdist_snippets P_ I_ T_ rand seed shape mass_absolute range update inf left_P_ aamp replace_inf assert_almost_equal enumerate left_I_ Series aampi randint full seed update aampi_egress I_ replace_inf Series rand copy aampi 
randint assert_almost_equal range left_I_ seed update P_ I_ replace_inf T_ rand Series aampi aamp randint assert_almost_equal range seed update aampi_egress I_ replace_inf Series rand copy aampi randint assert_almost_equal range left_I_ seed update P_ I_ replace_inf T_ rand Series aampi aamp randint assert_almost_equal seed update aampi_egress I_ replace_inf Series rand copy aampi randint assert_almost_equal left_I_ seed update P_ concatenate replace_inf Series T_ rand aampi aamp randint assert_almost_equal range seed update aampi_egress concatenate replace_inf Series rand copy aampi randint assert_almost_equal range seed update P_ replace_inf Series T_ rand aampi aamp randint assert_almost_equal range seed update aampi_egress replace_inf Series rand copy aampi randint assert_almost_equal range update P_ inf rand left_P_ aampi copy flatten shape rolling_window distance assert_almost_equal full range len sort size aamp_distance_profile append range aamp_motifs aamp assert_array_equal assert_almost_equal array seed normal arange aamp_motifs aamp assert_almost_equal int list sort naive_aamp_match ceil assert_almost_equal array list sort naive_aamp_match assert_almost_equal array int naive_aamp_match ceil assert_almost_equal aamp_match seed aamp_ostinato assert_almost_equal seed aamp_ostinato assert_almost_equal int64 atsc array assert_equal sorted allc array assert_equal dot zeros range len inf copy mean nanstd nan zeros range cumsum square shape empty sqrt abs z_norm all inf astype isfinite copy distance int64 rolling_window int STUMPY_EXCL_ZONE_DENOM get_max_window_size floor range range range assert_almost_equal sliding_dot_product naive_rolling_window_dot_product nanvar rand rolling_window assert_almost_equal welford_nanvar nanvar rolling_window assert_almost_equal array welford_nanvar nanvar rand rolling_window nan assert_almost_equal welford_nanvar rand welford_nanstd rolling_window nanstd assert_almost_equal nanmin rand _rolling_nanmin_1d assert_almost_equal range nanmin rand rolling_window rolling_nanmin assert_almost_equal range nanmax rand _rolling_nanmax_1d assert_almost_equal range nanmax rolling_nanmax rand rolling_window assert_almost_equal range naive_compute_mean_std assert_almost_equal compute_mean_std naive_compute_mean_std assert_almost_equal compute_mean_std naive_compute_mean_std assert_almost_equal compute_mean_std assert_almost_equal array compute_mean_std naive_compute_mean_std_multidimensional assert_almost_equal array compute_mean_std naive_compute_mean_std_multidimensional assert_almost_equal array compute_mean_std naive_compute_mean_std_multidimensional norm z_norm _calculate_squared_distance_profile compute_mean_std sliding_dot_product rolling_window item assert_almost_equal norm z_norm compute_mean_std sliding_dot_product rolling_window item assert_almost_equal calculate_distance_profile norm z_norm mueen_calculate_distance_profile rolling_window assert_almost_equal norm z_norm mass copy rolling_window assert_almost_equal norm z_norm inf mass copy rolling_window nan assert_almost_equal norm z_norm inf mass copy rolling_window assert_almost_equal norm z_norm inf mass copy rolling_window nan assert_almost_equal norm z_norm inf mass copy rolling_window assert_almost_equal norm copy rolling_window mass_absolute assert_almost_equal norm inf copy rolling_window nan mass_absolute assert_almost_equal norm inf copy rolling_window mass_absolute assert_almost_equal norm inf copy rolling_window nan mass_absolute assert_almost_equal norm inf copy rolling_window mass_absolute 
assert_almost_equal mass_absolute assert_almost_equal array _mass_distance_matrix inf distance_matrix assert_almost_equal full inf _mass_absolute_distance_matrix cdist rolling_window assert_almost_equal full apply_exclusion_zone shape assert_array_equal replace_inf empty array range apply_exclusion_zone shape assert_array_equal replace_inf empty array range Series naive_compute_mean_std preprocess assert_almost_equal array all Series isfinite preprocess_non_normalized array assert_almost_equal full range Series preprocess_diagonal naive_compute_mean_std assert_almost_equal array fill_diagonal inf reshape copy STUMPY_MAX_DISTANCE replace_distance load remove rand array_to_temp_file assert_almost_equal sum _count_diagonal_ndist ones astype int64 assert_almost_equal empty range enumerate len get_array_ranges _get_array_ranges assert_almost_equal array range get_array_ranges array assert_almost_equal _get_array_ranges get_array_ranges array assert_almost_equal _get_array_ranges get_array_ranges array assert_almost_equal _get_array_ranges all float64 astype isfinite rolling_window nan assert_almost_equal rolling_isfinite assert_array_equal array _jagged_list_to_array assert_array_equal array _jagged_list_to_array assert_array_equal _get_mask_slices _idx_to_mp rand randint assert_almost_equal naive_idx_to_mp zeros range arange minimum zeros naive_nnmark naive_iac norm z_norm inf flatten rolling_window stump aamp zeros tolist min index max range len assert_almost_equal naive_nnmark _nnmark _iac naive_cac assert_almost_equal _cac naive_cac assert_almost_equal _cac naive_iac naive_cac assert_almost_equal _rea naive_rea fluss naive_rea naive_cac assert_almost_equal naive_iac P_ I_ T_ roll flatten max _cac _iac naive_right_mp cac_1d_ uniform stump floss distance_profile ceil update inf copy replace_inf assert_almost_equal enumerate int rolling_window P_ I_ T_ roll flatten max _cac _iac naive_right_mp cac_1d_ uniform floss ceil update inf copy aamp aamp_distance_profile replace_inf assert_almost_equal enumerate int rolling_window P_ I_ T_ roll flatten max _cac all _iac naive_right_mp cac_1d_ uniform stump floss distance_profile ceil update inf isfinite copy replace_inf assert_almost_equal enumerate int rolling_window P_ I_ T_ roll flatten max _cac all _iac naive_right_mp cac_1d_ uniform floss ceil update inf isfinite copy aamp aamp_distance_profile replace_inf assert_almost_equal enumerate int rolling_window int Series gpu_aamp aamp ceil replace_inf assert_almost_equal int gpu_aamp aamp ceil replace_inf assert_almost_equal replace_inf gpu_aamp assert_almost_equal aamp int gpu_aamp aamp ceil replace_inf assert_almost_equal replace_inf gpu_aamp assert_almost_equal aamp int concatenate gpu_aamp aamp ceil replace_inf assert_almost_equal concatenate rand gpu_aamp aamp replace_inf assert_almost_equal concatenate gpu_aamp aamp replace_inf assert_almost_equal array int rand gpu_aamp aamp ceil replace_inf assert_almost_equal rand gpu_aamp aamp replace_inf assert_almost_equal int gpu_aamp aamp ceil replace_inf assert_almost_equal copy gpu_aamp aamp replace_inf assert_almost_equal int gpu_aamp aamp ceil replace_inf assert_almost_equal array gpu_aampdist assert_almost_equal aampdist seed aamp_ostinato gpu_aamp_ostinato assert_almost_equal seed aamp_ostinato gpu_aamp_ostinato assert_almost_equal seed assert_almost_equal gpu_ostinato ostinato seed assert_almost_equal gpu_ostinato ostinato update int _PAN _bfs_indices _n_processed gpu_stimp PAN_ stump _M transform_pan ceil replace_inf assert_almost_equal full range 
enumerate int Series gpu_stump ceil replace_inf assert_almost_equal stamp int gpu_stump ceil replace_inf assert_almost_equal stamp replace_inf assert_almost_equal stamp gpu_stump int gpu_stump ceil replace_inf assert_almost_equal stamp replace_inf assert_almost_equal stamp gpu_stump int concatenate gpu_stump ceil replace_inf assert_almost_equal stamp concatenate rand gpu_stump replace_inf assert_almost_equal stamp concatenate gpu_stump replace_inf assert_almost_equal stamp array int rand gpu_stump ceil replace_inf assert_almost_equal stamp rand gpu_stump replace_inf assert_almost_equal stamp int gpu_stump ceil replace_inf assert_almost_equal stamp copy gpu_stump replace_inf assert_almost_equal stamp int gpu_stump ceil replace_inf assert_almost_equal stamp array seed multi_mass_absolute float64 astype preprocess_non_normalized _multi_mass_absolute assert_almost_equal assert_almost_equal multi_mass_absolute preprocess_non_normalized _multi_mass_absolute maamp_multi_distance_profile assert_almost_equal range preprocess_non_normalized int _get_first_maamp_profile preprocess_non_normalized maamp assert_equal ceil assert_almost_equal assert_almost_equal range maamp_subspace assert_almost_equal asarray range maamp_subspace assert_almost_equal range maamp_subspace assert_almost_equal asarray range maamp_subspace int float64 astype maamp aamp ceil assert_almost_equal ceil int assert_almost_equal maamp int asarray maamp ceil assert_almost_equal range ceil int assert_almost_equal maamp int asarray maamp ceil assert_almost_equal range int T maamp ceil assert_almost_equal DataFrame int T asarray maamp ceil assert_almost_equal DataFrame range int concatenate maamp ceil assert_almost_equal array int rand maamp ceil assert_almost_equal array int maamp copy ceil assert_almost_equal int maamp copy ceil assert_almost_equal sort size distance_profile append range stump assert_array_equal assert_almost_equal motifs array seed normal arange stump assert_almost_equal motifs motifs stump array assert_almost_equal int list naive_match sort ceil assert_almost_equal array int naive_match match ceil assert_almost_equal count_nonzero sort min isfinite ceil max assert_almost_equal empty _compute_P_ABBA assert_almost_equal _mpdist_vect mpdist_vect assert_almost_equal _mpdist_vect mpdist_vect assert_almost_equal _mpdist_vect mpdist_vect mpdist assert_almost_equal mpdist assert_almost_equal mpdist assert_almost_equal inf sort rand copy _select_P_ABBA_value assert_almost_equal mpdist _mpdist partial assert_almost_equal asarray float64 astype shape apply_include _apply_include assert_almost_equal empty range seed compute_mean_std multi_mass float64 astype _multi_mass assert_almost_equal multi_mass assert_almost_equal _multi_mass compute_mean_std assert_almost_equal multi_distance_profile range compute_mean_std int compute_mean_std mstump _get_first_mstump_profile assert_equal ceil assert_almost_equal naive_rolling_window_dot_product _get_multi_QT rolling_window assert_almost_equal empty range assert_almost_equal range subspace assert_almost_equal asarray range subspace assert_almost_equal range subspace assert_almost_equal asarray range subspace int mstump float64 astype ceil assert_almost_equal stamp ceil int assert_almost_equal mstump int asarray mstump ceil assert_almost_equal range ceil int assert_almost_equal mstump int asarray mstump ceil assert_almost_equal range int T mstump ceil assert_almost_equal DataFrame int T asarray mstump ceil assert_almost_equal DataFrame range mstump mstump int mstump copy ceil 
assert_almost_equal int mstump copy ceil assert_almost_equal rand preprocess mass_absolute assert_almost_equal stump aamp copy prescraamp assert_almost_equal prescrump copy seed P_ update copy scraamp randint assert_almost_equal range scrump seed P_ update copy scraamp randint assert_almost_equal range scrump seed P_ update copy scraamp randint assert_almost_equal range scrump copy copy skip gpu_aamp gpu_stump assert_almost_equal update P_ stumpi rand aampi copy assert_almost_equal range aamp_ostinato assert_almost_equal ostinato assert_almost_equal gpu_aamp_ostinato gpu_ostinato skip float64 astype aampdist float64 astype gpu_aampdist float64 astype skip maamp_multi_distance_profile maamp maamp_subspace copy aamp_motifs aamp assert_almost_equal motifs copy aamp_match assert_almost_equal rand snippets aampdist_snippets ostinato ostinato apply_exclusion_zone permutation inf aamp_distance_matrix argmin min empty max range int _count_diagonal_ndist inf aamp_distance_matrix astype _get_array_ranges int64 ceil full range seed int naive_prescraamp ceil randint assert_almost_equal prescraamp range seed int naive_prescraamp ceil randint assert_almost_equal prescraamp range seed int naive_prescraamp ceil randint assert_almost_equal prescraamp range seed int naive_prescraamp ceil randint assert_almost_equal prescraamp range seed int update P_ I_ right_I_ replace_inf naive_scraamp scraamp ceil randint assert_almost_equal left_I_ seed update P_ I_ right_I_ replace_inf scraamp naive_scraamp randint assert_almost_equal left_I_ seed update P_ right_I_ replace_inf scraamp naive_scraamp randint assert_almost_equal left_I_ seed int update P_ I_ right_I_ replace_inf naive_scraamp scraamp ceil randint assert_almost_equal left_I_ update int P_ I_ right_I_ scraamp aamp ceil replace_inf assert_almost_equal left_I_ update P_ I_ right_I_ scraamp aamp replace_inf assert_almost_equal left_I_ update P_ I_ right_I_ scraamp aamp replace_inf assert_almost_equal left_I_ update int P_ I_ right_I_ scraamp aamp ceil replace_inf assert_almost_equal left_I_ seed int update P_ I_ replace_inf naive_scraamp naive_prescraamp scraamp ceil randint assert_almost_equal range seed int update P_ I_ right_I_ replace_inf naive_scraamp naive_prescraamp scraamp ceil randint assert_almost_equal range left_I_ update int P_ I_ right_I_ scraamp aamp ceil replace_inf assert_almost_equal left_I_ update int P_ I_ right_I_ scraamp aamp ceil replace_inf assert_almost_equal left_I_ update int P_ I_ right_I_ scraamp aamp ceil replace_inf assert_almost_equal left_I_ seed int update P_ concatenate replace_inf naive_scraamp scraamp ceil randint assert_almost_equal seed int update P_ replace_inf rand naive_scraamp scraamp ceil randint assert_almost_equal seed int update P_ I_ right_I_ replace_inf naive_scraamp copy scraamp ceil randint assert_almost_equal left_I_ seed int update P_ I_ right_I_ replace_inf naive_scraamp scraamp ceil randint assert_almost_equal array left_I_ seed int prescrump ceil randint assert_almost_equal range seed int prescrump ceil randint assert_almost_equal range seed int prescrump ceil randint assert_almost_equal range seed int prescrump ceil randint assert_almost_equal range seed int update P_ I_ right_I_ replace_inf assert_almost_equal ceil randint left_I_ scrump seed update P_ I_ right_I_ replace_inf assert_almost_equal randint left_I_ scrump seed update P_ right_I_ replace_inf assert_almost_equal randint left_I_ scrump seed int update P_ I_ right_I_ replace_inf assert_almost_equal ceil randint left_I_ scrump update int P_ I_ 
right_I_ stump assert_almost_equal ceil replace_inf left_I_ stamp scrump update P_ I_ right_I_ stump assert_almost_equal replace_inf left_I_ stamp scrump update P_ I_ right_I_ assert_almost_equal replace_inf left_I_ stamp scrump update int P_ I_ right_I_ assert_almost_equal ceil replace_inf left_I_ stamp scrump seed int update P_ prescrump I_ replace_inf ceil randint assert_almost_equal range scrump seed int update P_ prescrump I_ right_I_ replace_inf assert_almost_equal ceil randint left_I_ range scrump update int P_ I_ right_I_ assert_almost_equal ceil replace_inf left_I_ stamp scrump update int P_ I_ right_I_ assert_almost_equal ceil replace_inf left_I_ stamp scrump update int P_ I_ right_I_ assert_almost_equal ceil replace_inf left_I_ stamp scrump seed int update P_ concatenate I_ right_I_ replace_inf assert_almost_equal ceil randint left_I_ scrump seed int update P_ replace_inf rand ceil randint assert_almost_equal scrump seed int update P_ I_ right_I_ replace_inf copy assert_almost_equal ceil randint left_I_ scrump seed int update P_ I_ right_I_ replace_inf assert_almost_equal ceil randint left_I_ array scrump _get_all_profiles get_all_mpdist_profiles assert_almost_equal _get_all_profiles get_all_mpdist_profiles assert_almost_equal _get_all_profiles get_all_mpdist_profiles assert_almost_equal mpdist_snippets assert_almost_equal snippets mpdist_snippets snippets mpdist_snippets snippets int compute_mean_std _mass_PI mass ceil assert_almost_equal int ceil replace_inf assert_almost_equal stamp replace_inf assert_almost_equal stamp int copy ceil replace_inf assert_almost_equal stamp replace_inf assert_almost_equal stamp copy int ceil replace_inf assert_almost_equal stamp array append arange split append len list _bfs_indices naive_bsf_indices assert_almost_equal array _PAN scrump seed prescrump ceil range update _bfs_indices transform_pan replace_inf assert_almost_equal _n_processed enumerate int PAN_ _M stimp randint full _PAN scrump seed prescrump ceil range update _bfs_indices transform_pan replace_inf assert_almost_equal _n_processed enumerate int PAN_ _M stimp randint full update int _PAN _bfs_indices _n_processed PAN_ stump _M stimp transform_pan ceil replace_inf assert_almost_equal full range enumerate int _stomp ceil replace_inf assert_almost_equal stamp int _stomp ceil replace_inf assert_almost_equal stamp replace_inf assert_almost_equal stamp _stomp int copy _stomp ceil replace_inf assert_almost_equal stamp copy _stomp replace_inf assert_almost_equal stamp int _stomp ceil replace_inf assert_almost_equal stamp array int Series stump ceil replace_inf assert_almost_equal replace_inf assert_almost_equal stump Series int concatenate Series stump ceil replace_inf assert_almost_equal concatenate Series rand stump replace_inf assert_almost_equal stamp concatenate Series stump replace_inf assert_almost_equal stamp int Series rand stump ceil replace_inf assert_almost_equal stamp Series rand stump replace_inf assert_almost_equal stamp int Series copy stump ceil replace_inf assert_almost_equal stamp Series copy stump replace_inf assert_almost_equal stamp int stump ceil replace_inf assert_almost_equal stamp array P_ I_ T_ rand stamp seed shape ceil range update stumpi inf left_P_ replace_inf assert_almost_equal empty enumerate left_I_ int Series mass randint seed update stumpi I_ replace_inf Series rand copy stumpi_egress randint assert_almost_equal range left_I_ seed int update P_ stumpi I_ replace_inf T_ rand Series ceil randint assert_almost_equal stamp range seed update stumpi I_ 
replace_inf Series rand copy stumpi_egress randint assert_almost_equal range left_I_ seed int update P_ stumpi I_ replace_inf T_ rand Series ceil randint assert_almost_equal stamp seed update stumpi I_ replace_inf rand copy stumpi_egress randint assert_almost_equal left_I_ seed int update P_ stumpi concatenate replace_inf Series T_ rand ceil randint assert_almost_equal stamp range seed update stumpi concatenate replace_inf Series rand copy stumpi_egress randint assert_almost_equal range seed int update P_ stumpi replace_inf Series T_ rand ceil randint assert_almost_equal stamp range seed update stumpi replace_inf Series rand copy stumpi_egress randint assert_almost_equal range update P_ z_norm stumpi inf rand left_P_ copy flatten shape rolling_window distance assert_almost_equal full range len | TDAmeritrade/stumpy | 1,029 |
TIA-Lab/PanNuke-metrics | ['selection bias', 'whole slide images'] | ['PanNuke Dataset Extension, Insights and Baselines'] | utils.py run.py main remap_label get_fast_pq binarize get_tissue_idx load join remap_label format binarize print DataFrame astype to_csv nanmean mkdir nan append get_fast_pq range list uint8 linear_sum_assignment copy nonzero unique append zeros sum array len list remove sorted shape int32 unique zip append zeros sum enumerate remove tolist astype unique zeros range range len | # PanNuke Evaluation Metrics This repository calculates metrics on the PanNuke dataset, as reported in: <br /> **"PanNuke Dataset Extension, Insights and Baselines"** <br /> The PanNuke dataset can be downloaded [here](https://warwick.ac.uk/fac/sci/dcs/research/tia/data/pannuke). <br /> In the repository, the metrics that are calculated are: <br /> - **Binary PQ (bPQ)**: Assumes all nuclei belong to the same class and reports the average PQ across tissue types. <br /> - **Multi-Class PQ (mPQ)**: Reports the average PQ across the classes and tissue types. <br /> - **Neoplastic PQ**: Reports the PQ for the neoplastic class on all tissues. <br /> - **Non-Neoplastic PQ**: Reports the PQ for the non-neoplastic class on all tissues. <br /> - **Inflammatory PQ**: Reports the PQ for the inflammatory class on all tissues. <br /> | 1,030 |
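For context on the bPQ and mPQ figures above: panoptic quality (PQ) for a single class is detection quality times segmentation quality over IoU-matched instances. The sketch below only illustrates that definition; the function name `panoptic_quality`, the brute-force matching loop, and the epsilon constants are mine, not the repository's `get_fast_pq` implementation.

```python
import numpy as np

def panoptic_quality(true_inst, pred_inst, iou_thresh=0.5):
    """PQ for one class, given instance-label maps where 0 is background (illustrative only)."""
    true_ids = [i for i in np.unique(true_inst) if i != 0]
    pred_ids = [i for i in np.unique(pred_inst) if i != 0]
    matched_ious, used_preds = [], set()
    for t in true_ids:
        t_mask = true_inst == t
        best_iou, best_p = 0.0, None
        for p in pred_ids:
            if p in used_preds:
                continue
            p_mask = pred_inst == p
            union = np.logical_or(t_mask, p_mask).sum()
            iou = np.logical_and(t_mask, p_mask).sum() / union if union else 0.0
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou > iou_thresh:                  # IoU > 0.5 makes the match unique
            matched_ious.append(best_iou)
            used_preds.add(best_p)
    tp = len(matched_ious)
    fp, fn = len(pred_ids) - tp, len(true_ids) - tp
    dq = tp / (tp + 0.5 * fp + 0.5 * fn + 1e-6)    # detection quality (F1-style term)
    sq = sum(matched_ious) / (tp + 1e-6)           # segmentation quality (mean matched IoU)
    return dq * sq
```

bPQ then corresponds to collapsing all nucleus types into a single class before computing this score and averaging over tissue types, while mPQ computes the score per class first.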
TKouyama/DeepUnet_Keras | ['semantic segmentation'] | ['DeepUNet: A Deep Fully Convolutional Network for Pixel-level Sea-Land Segmentation'] | unet.py unet_deep_modified_se.py train_and_predict_aug.py unet_deep_modified.py normalize_y train_unet normalize_x dice_coef_loss denormalize_x load_Y bce_dice_loss load_X denormalize_y dice_coef predict UNet UNet UNet int normalize_x sort sep float32 resize zeros imread listdir enumerate int normalize_y sort sep float32 IMREAD_GRAYSCALE resize zeros imread listdir enumerate flatten sum binary_crossentropy dice_coef_loss get_model print load_Y UNet fit_generator save_weights load_X flow ImageDataGenerator sep compile str imwrite sep denormalize_x UNet load_X load_weights denormalize_y get_model imread enumerate | # DeepUnet_Keras_SE Keras implementation of Deep Unet with SE block DeepUnet paper: - https://arxiv.org/abs/1709.00201 The basic structure of the Unet in this implementation is based on - http://ni4muraano.hatenablog.com/entry/2017/08/10/101053 Requirements: - tensorflow-gpu >= 1.13.2, e.g. pip install tensorflow-gpu==1.13.2 (for 1.13.2: CUDA 10.0, cuDNN 7.4) - If tensorflow-gpu >= 2.0, small corrections are needed. | 1,031 |
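Because the README names the SE block without describing it: a squeeze-and-excitation block reweights feature channels with a learned global gate. The snippet below is a generic sketch against the tf.keras functional API, not the code in `unet_deep_modified_se.py`; the helper name `se_block` and the `reduction` default are assumptions.

```python
from tensorflow.keras import layers

def se_block(x, reduction=16):
    """Generic squeeze-and-excitation sketch (illustrative, not the repository's code)."""
    channels = int(x.shape[-1])
    s = layers.GlobalAveragePooling2D()(x)                        # squeeze: global per-channel statistics
    s = layers.Dense(max(channels // reduction, 1), activation='relu')(s)
    s = layers.Dense(channels, activation='sigmoid')(s)           # excitation: per-channel gates in (0, 1)
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                              # recalibrate the feature map channel-wise
```

Where exactly this repository inserts the block inside the Deep UNet would need to be checked in `unet_deep_modified_se.py`; the sketch only shows the block itself.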
TLMichael/Acc-SZOFW | ['adversarial attack'] | ['Accelerated Stochastic Gradient-free and Projection-free Methods'] | app/methods/zo_sfw.py app/methods/acc_szofw.py app/run.py app/utils.py app/methods/zscg.py app/methods/acc_szofwp.py app/datasets.py app/query_model.py CovtypeBinary A9A W8A get_dataset Phishing unison_shuffled_copies count print1 QueryModel main euclidean_proj_simplex load_default DataFetcher batch_est_grad P_l1 batch_compute_loss euclidean_proj_l1ball LMO_L1 AccSZOFW AccSZOFWP ZOSFW ZSCG seed permutation len A9A W8A Phishing CovtypeBinary format print GauGE item CooGE UniGE AccSZOFWP QueryModel ArgumentParser dataset AccSZOFW get_dataset pprint attack parse_args num_features estimator ZOSFW ZSCG vars join result_path Q print method items setattr size range func DataLoader next func iter abs argmax zeros_like sign shape euclidean_proj_l1ball reshape cumsum shape float clip euclidean_proj_simplex shape abs | # Accelerated Stochastic Gradient-free and Projection-free Methods PyTorch Code for "Accelerated Stochastic Gradient-free and Projection-free Methods". ## Prerequisites - Python 3.7 - PyTorch 1.3.0 - tensorflow 1.15.2 - tqdm - pandas - Pillow - scikit-learn | 1,032 |
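To unpack the two ideas in the title: "gradient-free" methods estimate gradients from function values only, and "projection-free" (Frank-Wolfe) methods replace projections with a linear minimization oracle, for example over an l1 ball. The functions below are textbook sketches of those two ingredients; the names, defaults, and sampling scheme are illustrative assumptions, not the repository's `UniGE` or `LMO_L1` code.

```python
import numpy as np

# Generic textbook sketches; the repository's estimators and oracles will differ in detail.
def zo_gradient(f, x, mu=1e-4, n_samples=10):
    """Two-point zeroth-order gradient estimate using only function evaluations."""
    d = x.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                        # random direction on the unit sphere
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return d * g / n_samples

def lmo_l1(g, radius=1.0):
    """Linear minimization oracle for the l1 ball: argmin over ||s||_1 <= radius of <g, s>."""
    s = np.zeros_like(g, dtype=float)
    i = int(np.argmax(np.abs(g)))
    s[i] = -radius * np.sign(g[i])
    return s
```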
TManzini/DebiasMulticlassWordEmbedding | ['word embeddings'] | ['Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them', 'Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings'] | Downstream/modelUtil.py Debiasing/loader.py Debiasing/neighborAnalysis.py Debiasing/util.py Downstream/featurize.py Downstream/models/POSTagger.py Debiasing/biasOps.py Downstream/DataLoader.py Debiasing/word2vec.py Downstream/DataBatch.py Debiasing/evalBias.py Debiasing/debias.py Downstream/EvalBiasPipeline.py Downstream/vocab.py normalize equalize_and_soften calculateDirectBias identify_bias_subspace equalize_and_soften_old project_onto_subspace neutralize_and_equalize multiclass_evaluation scoredAnalogyAnswers generateAnalogies_parallelogram _unary_s generateAnalogies load_eval_terms load_test_terms load_analogy_templates load_def_set_pairs load_def_sets plot_multi get_most_biased plot_binary parse_arguments main removeWords evalTerms load_w2v writeGroupAnalogies listContainsMultiple load_legacy_w2v writeAnalogies pruneWordVecs load_legacy_w2v_as_keyvecs convert_legacy_to_keyvec write_w2v isValidWord preprocessWordVecs SeqDataBatch loadNERDatasetXY cleanLine biasEvalPipeline formatBatchedExamples constructEmbeddingTensorFromVocabAndWvs getExampleSubset precisionRecallEval train_step test Vocab POSTagger normalize equalize_and_soften calculateDirectBias identify_bias_subspace equalize_and_soften_old project_onto_subspace neutralize_and_equalize multiclass_evaluation scoredAnalogyAnswers generateAnalogies_parallelogram _unary_s generateAnalogies load_eval_terms load_test_terms load_analogy_templates load_def_set_pairs load_def_sets plot_multi get_most_biased plot_binary parse_arguments main removeWords evalTerms load_w2v writeGroupAnalogies listContainsMultiple load_legacy_w2v writeAnalogies pruneWordVecs load_legacy_w2v_as_keyvecs convert_legacy_to_keyvec write_w2v isValidWord preprocessWordVecs SeqDataBatch loadNERDatasetXY cleanLine biasEvalPipeline formatBatchedExamples constructEmbeddingTensorFromVocabAndWvs getExampleSubset precisionRecallEval train_step test Vocab POSTagger items norm items concatenate PCA mean append array fit zeros_like norm normalize square copy sqrt append project_onto_subspace expand_dims sum zeros svd norm str view backward print step zero_grad SGD range t flatten zip float mm diag enumerate str norm view backward print step zero_grad SGD t flatten zip float mm range enumerate items most_similar sorted str append items sorted str scoredAnalogyAnswers append append mean _unary_s add_argument ArgumentParser items sorted list reversed dot vocab_path removeWords add_subplot verbose tick_params pearsonr spearmanr values open show sorted list load_legacy_w2v ylim scatter append bias_specific format get_most_biased biased_embeddings reversed targets mean preprocessWordVecs debiased_embeddings annotate keys enumerate load items norm print text extend dot figure zeros load_def_sets len vocab_path removeWords verbose pearsonr spearmanr values open show subplot sorted list load_legacy_w2v title scatter append bias_specific format get_most_biased biased_embeddings reversed targets preprocessWordVecs debiased_embeddings keys enumerate load items norm print extend dot figure load_def_sets len multi parse_arguments plot_multi plot_binary items Word2VecKeyedVectors add append len load_word2vec_format len str close write open str close write open str norm format print project_onto_subspace 
items ascii_lowercase norm items set sorted close open zip append split precisionRecallEval save device open str train_step ttest_rel RMSprop add Vocab load_state_dict append sum CrossEntropyLoss SeqDataBatch state_dict range format readlines close test POSTagger addXY lower zip keys setEmbeddings flush enumerate load padBatch print getNumericXY write getExampleSubset parameters filter constructEmbeddingTensorFromVocabAndWvs split loadNERDatasetXY len append long getIdx range len append zip format criterion model backward print train zero_grad item append to step enumerate len criterion model eval item append to enumerate len f1_score concatenate recall_score shape eval precision_score round append zeros to numpy enumerate | # Debiasing Multiclass Word Embeddings This repository contains the code that was written in support of the NAACL 2019 paper [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047). The repository has three main components. 1. Performing debiasing and MAC score calculations (./Debiasing/debias.py) 2. Cluster bias analysis (./Debiasing/neighborAnalysis.py) 3. Downstream evaluation (./Downstream/BiasEvalPipelineRunner.ipynb) ## Debiasing & MAC In order to run these files several data files need to be downloaded. If you would like to replicate our results from scratch, you must download the following files. | 1,033 |
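For component 1 above, the core step in this style of debiasing is removing the component of a word vector that lies in an identified bias subspace. The sketch below is illustrative only: the function names, the signatures, and the assumption of an orthonormal basis are mine, and the actual routines in `Debiasing/debias.py` (such as `identify_bias_subspace` and `neutralize_and_equalize`) will differ in detail.

```python
import numpy as np

def bias_component(v, basis):
    """Part of v inside the bias subspace spanned by the orthonormal rows of `basis` (illustrative)."""
    return sum(np.dot(v, b) * b for b in basis)

def neutralize(word_vectors, basis, neutral_words):
    """Subtract the bias-subspace component from words meant to be bias-neutral."""
    debiased = dict(word_vectors)
    for w in neutral_words:
        v = word_vectors[w] - bias_component(word_vectors[w], basis)
        debiased[w] = v / np.linalg.norm(v)           # keep vectors at unit length
    return debiased
```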
TUI-NICR/ESANet | ['semantic segmentation'] | ['Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis'] | src/confusion_matrix.py imagenet_pretraining.py src/logger.py src/models/model_utils.py src/datasets/scenenetrgbd/pytorch_dataset.py src/build_model.py src/models/model.py inference_dataset.py src/models/external_code/from_pytorch_cv.py src/datasets/nyuv2/pytorch_dataset.py src/datasets/__init__.py src/preprocessing.py train.py src/__init__.py src/datasets/cityscapes/prepare_dataset.py src/models/model_one_modality.py src/datasets/sunrgbd/sunrgbd.py src/datasets/scenenetrgbd/prepare_dataset.py src/datasets/scenenetrgbd/scenenetrgbd.py inference_samples.py src/datasets/scenenetrgbd/__init__.py src/models/resnet.py src/datasets/dataset_base.py model_to_onnx.py src/datasets/nyuv2/__init__.py src/args.py src/datasets/cityscapes/__init__.py src/utils.py src/datasets/cityscapes/pytorch_dataset.py inference_time_whole_model.py src/datasets/sunrgbd/__init__.py src/datasets/scenenetrgbd/scenenet_pb2.py src/prepare_data.py eval.py src/models/rgb_depth_fusion.py src/datasets/sunrgbd/prepare_dataset.py src/datasets/cityscapes/cityscapes.py src/models/context_modules.py src/datasets/nyuv2/nyuv2.py src/datasets/nyuv2/prepare_dataset.py src/datasets/sunrgbd/pytorch_dataset.py validate build_model AverageMeter accuracy preprocess_validation_image get_data ProgressMeter preprocess_training_image adjust_learning_rate main train save_ckpt parse_args _load_img time_inference_pytorch _parse_args color_label_from_numpy_array alloc_buf time_inference_tensorrt get_engine time_inference_onnxruntime train_main validate train_one_epoch parse_args get_optimizer ArgumentParserRGBDSegmentation build_model iou_pytorch miou_pytorch ConfusionMatrixPytorch ConfusionMatrixTensorflow CSVLogger prepare_data RandomFlip get_preprocessor ToTensor RandomHSV RandomCrop Normalize MultiScaleLabel RandomRescale Rescale load_ckpt CrossEntropyLoss2d get_best_checkpoint CrossEntropyLoss2dForValidDataUnweighted print_log CrossEntropyLoss2dForValidData save_ckpt_every_epoch save_ckpt DatasetBase CityscapesBase save_indexed_png get_basename get_files_by_extension get_identifier Cityscapes _get_colormap NYUv2Base save_indexed_png download_file DownloadProgressBar dimshuffle NYUv2 _output_path _source_path _mapping_for_instances save_indexed_png get_identifier SceneNetRGBD SceneNetRGBDBase _write_list_to_file download_file DownloadProgressBar SUNRGBD SUNRBDBase PyramidPoolingModule get_context_module AdaptivePyramidPoolingModule DecoderModule ESANet Decoder Upsample main main ESANetOneModality ConvBNAct Swish swish Hswish SqueezeAndExcitationTensorRT SqueezeAndExcitation ConvBN load_pretrained_with_different_encoder_block conv1x1 ResNet ResNet18 NonBottleneck1D Bottleneck ResNet34 conv3x3 ResNet50 BasicBlock SqueezeAndExciteFusionAdd export finetune batch_size add_argument lr ArgumentParser validate write_logs SGD get_data adjust_learning_rate max load_state_dict weight_file results_dir parse_args to save_ckpt range format build_model copy lr load join pop print num_examples extend parameters CSVLogger train epochs makedirs print load AUTOTUNE prefetch sample_distorted_bounding_box constant random_flip_left_right slice reshape transpose resize constant central_crop reshape transpose resize print to Classifier device batch_size model zero_grad numpy display from_numpy to update size avg item enumerate time criterion backward AverageMeter accuracy dict ProgressMeter step len eval AverageMeter ProgressMeter param_groups lr print 
join format save IMREAD_UNCHANGED imread cvtColor COLOR_BGR2RGB set_common_args parse_args ArgumentParserRGBDSegmentation add_argument asarray CLASS_COLORS print check_output nbytes volume get_binding_shape mem_alloc Stream nptype get_binding_dtype pagelocked_empty num_bindings range append time memcpy_htod copy range alloc_buf get_engine execute create_execution_context numpy memcpy_dtoh append len time SessionOptions copy InferenceSession set_providers append ORT_ENABLE_ALL range len warn ArgumentParserRGBDSegmentation set_common_args validate write_logs CrossEntropyLoss2dForValidDataUnweighted save_ckpt_every_epoch dataset get_optimizer ones len strftime cameras CrossEntropyLoss2dForValidData train_one_epoch results_dir parse_args n_classes_without_void sum append range get_lr modality save_ckpt format build_model last_ckpt copy ConfusionMatrixTensorflow pop join int load_ckpt print valid_full_res CrossEntropyLoss2d prepare_data extend named_parameters CSVLogger dict parameters OneCycleLR split compute_class_weights epochs makedirs model loss_function_train print_log dataset step append to sum get_lr mean enumerate time backward isnan parameters dict train numpy len time compute_whole_loss compute_miou print reset_loss reset_conf_matrix dict enumerate split format print Adam SGD parameters optimizer children ESANet warn modules kaiming_normal_ nr_decoder_blocks constant_ finetune pretrained_scenenet ESANetOneModality load_state_dict append encoder state_dict update is_available enumerate load pop isinstance extend bias named_parameters he_init weight sum type diag DoubleTensor join deepcopy print valid_full_res get_preprocessor raw_depth DataLoader Dataset exists Compose extend print format enumerate print join format save load format print exit load_state_dict isfile join print read_csv idxmax exists fromarray list asarray flatten putpalette save list basename sorted extend filter isfile append walk zeros bitget array range tolist squeeze replace find zeros len join output_path makedirs dirname print format AdaptivePyramidPoolingModule PyramidPoolingModule Identity ESANet randn shape eval Variable ESANetOneModality pop load_pretrained_with_different_encoder_block print ResNet load_url eval load_state_dict sum __name__ pop load_pretrained_with_different_encoder_block print ResNet load_url load_state_dict sum __name__ pop print ResNet load_url load_state_dict sum load join sum format print OrderedDict load_state_dict is_available idxmax read_csv print join | # ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis [](https://paperswithcode.com/sota/semantic-segmentation-on-sun-rgbd?p=efficient-rgb-d-semantic-segmentation-for) [](https://paperswithcode.com/sota/semantic-segmentation-on-nyu-depth-v2?p=efficient-rgb-d-semantic-segmentation-for) [](https://paperswithcode.com/sota/semantic-segmentation-on-cityscapes?p=efficient-rgb-d-semantic-segmentation-for) > You may also want to have a look at our follow-up work [EMSANet](https://github.com/TUI-NICR/EMSANet) (multi-task approach, better results for semantic segmentation, and cleaner and more extendable code base) This repository contains the code to our paper "Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis" ([IEEE Xplore](https://ieeexplore.ieee.org/document/9561675), [arXiv](https://arxiv.org/pdf/2011.06961.pdf)). 
Our carefully designed network architecture enables real-time semantic segmentation on a NVIDIA Jetson AGX Xavier and, thus, is well suited as a common initial processing step in a complex system for real-time scene | 1,034 |
TUM-LMF/BreizhCrops | ['time series'] | ['BreizhCrops: A Time Series Dataset for Crop Type Mapping'] | examples/evaluate.py processing/create_h5py.py breizhcrops/models/MSResNet.py breizhcrops/models/__init__.py breizhcrops/models/pretrained.py breizhcrops/models/TempCNN.py breizhcrops/__init__.py processing/write_classmapping.py processing/query_gee.py tests/data.py breizhcrops/models/StarRNN.py examples/train.py processing/write_tileids.py breizhcrops/models/OmniScaleCNN.py tests/training.py tests/package.py breizhcrops/datasets/urls.py examples/tune.py breizhcrops/models/LongShortTermMemory.py breizhcrops/datasets/breizhcrops.py tests/models.py breizhcrops/models/InceptionTime.py setup.py breizhcrops/models/TransformerModel.py processing/write_annotated_shp.py breizhcrops/utils.py untar unzip download_file update_progress DownloadProgressBar get_default_transform get_default_target_transform BreizhCrops ShortcutLayer InceptionModule InceptionTime LSTM BasicBlock3x3 conv3x3 conv5x5 MSResNet BasicBlock7x7 BasicBlock5x5 conv7x7 SampaddingConv1D_BN generate_layer_parameter_list get_out_channel_number build_layer_with_layer_parameter get_Prime_number_in_a_range OmniScaleCNN pretrained _download_and_load_weights StarRNN StarLayer StarCell TempCNN Conv1D_BatchNorm_Relu_Dropout Flatten FC_BatchNorm_Relu_Dropout Flatten TransformerModel select_hyperparameter main save parse_args get_dataloader metrics train train_epoch test_epoch parse_args get_model parse_args tune parse shapely2ee query load_geojson main parse_args main main check_exists test_download_h5dataset test_breizhcrops_geodataframe test_evaluate_models_fast evaluate_models test_evaluate_models test_get_codes_breizhcrops test_models_dummy_data test_init_breizhcrops test_breizhcrops_index_columnames test_pretrained test_data_value_range test_urls test_get_model test_belle_ile test_training int format isinstance write float round flush dirname print array append range int sum append get_out_channel_number get_Prime_number_in_a_range len join basename gettempdir download_file load_state_dict to eval lower _download_and_load_weights state_dict print dict dirname makedirs workers model batchsize fold save device datapath str Adam test_epoch logdir CrossEntropyLoss range get_dataloader classification_report join print select_hyperparameter train_epoch parameters cpu modelname get_model numpy makedirs add_argument ArgumentParser workers model batchsize device datapath list Adam test_epoch level preload_ram append logdir CrossEntropyLoss range get_dataloader join metrics set_index print train_epoch to_csv parameters mode cpu modelname get_model epochs makedirs ConcatDataset print dict DataLoader abspath BreizhCrops to lower recall_score precision_score cohen_kappa_score f1_score accuracy_score list train eval dict split dict int train choice format to_crs len list zip filterBounds DataFrame items T dict to_file read_csv merge check_exists ID join list str append exists BreizhCrops skip len geodataframe pretrained DataLoader test_epoch numpy BreizhCrops CrossEntropyLoss evaluate_models evaluate_models get_model rand model BreizhCrops BreizhCrops zeros list columns sort zip BreizhCrops check geodataframe BreizhCrops get_codes get_model to exp torchmodel parse_args train | TUM-LMF/BreizhCrops | 1,035 |
TUM-LMF/FutureGAN | ['video prediction'] | ['FutureGAN: Anticipating the Future Frames of Video Sequences using Spatio-Temporal 3d Convolutions in Progressively Growing GANs'] | video_dataset.py train.py custom_layers.py optflow.py eval.py data/generate_mmnist.py tb_logger.py plot_eval.py data/preprocess_kth.py utils.py eval_metrics.py model.py EqualizedConvTranspose3d EqualizedLinear GeneralizedDropOut Concat EqualizedConv3d MinibatchStdConcatLayer FadeInLayer PixelwiseNormLayer Flatten evaluate_pred fspecial_gauss calculate_mse calculate_ssim2 calculate_ssim calculate_psnr hox_downsample calculate_ms_ssim deconv_t deconv linear FutureGenerator get_module_names conv Discriminator deepcopy_module makeLegend Logger Trainer add_border make_image_grid make_video_grid get_out_dim_conv_transpose save_image_grid get_image_grid get_out_dim_conv count_model_params save_video_grid video_loader VideoFolder default_image_loader find_videos make_dataset accimage_loader has_file_allowed_extension pil_loader arr_from_img get_random_images load_dataset main get_comb_name generate_moving_mnist save_dataset data zeros_like model batch_size calculate_metric save_image_grid resl DataLoader unsqueeze save cuda test_dir FloatTensor len nframes_in getattr load_state_dict iter count_model_params append experiment_name next range detach format nc size Compose VideoFolder npx_border mean eval model_path out_border load int join nframes_pred metrics print Variable mimsave tqdm get_image_grid in_border data_root deep_pred makedirs cpu numpy permute compare_mse numpy permute compare_psnr compare_ssim numpy permute exp fspecial_gauss reshape min mean shape numpy fftconvolve clip asarray inf calculate_ssim2 size append numpy array range BatchNorm3d Conv3d ReplicationPad3d EqualizedConv3d ReLU append LeakyReLU PixelwiseNormLayer BatchNorm3d ReplicationPad3d PixelwiseNormLayer ReLU append LeakyReLU EqualizedConvTranspose3d ConvTranspose3d GeneralizedDropOut BatchNorm3d Conv3d ReplicationPad3d EqualizedConv3d ReLU append LeakyReLU PixelwiseNormLayer EqualizedLinear Sigmoid append Flatten Linear named_children Sequential add_module load_state_dict state_dict append items product ptp min pi zeros cartToPolar make_grid fill_ size copy_ cpu range fromarray int NEAREST make_video_grid size save resize numpy cpu make_grid copy_ fill_ fromarray add_border make_image_grid NEAREST resize numpy array fromarray add_border make_image_grid NEAREST save resize numpy expand lower sort append exists default_image_loader image_loader join sorted find_videos floor has_file_allowed_extension append range walk len format print train_images where train_labels unique append test_images test_labels size product getdata append randint tuple rand pi paste fromarray list transpose map load_dataset append range format get_random_images keys zeros enumerate combinations print tqdm randint array len format makedirs print generate_moving_mnist save_dataset | TUM-LMF/FutureGAN | 1,036 |
TadejSkvorc/MICE | ['word embeddings', 'cross lingual transfer'] | ['MICE: Mining Idioms with Contextual Embeddings'] | models/tensorflow_bilstm_elmo_per_expression_multiple_files.py models/bow_classification.py models/tensorflow_bilstm_elmo_multiple_files_shuffled.py models/tensorflow_bilstm_bert_multiple_files_shuffled.py models/crosloengual_bert_multiple_files.py models/tensorflow_bilstm_bert_per_expression_multiple_files.py get_xy_per_expression bert_tensorflow_test get_already_processed get_xy_per_expression get_already_processed get_xy bert_tensorflow_test get_bert_data_per_expression get_bert_data get_xy_per_expression get_already_processed get_xy bert_tensorflow_test get_bert_data_per_expression get_bert_data get_xy_per_expression get_already_processed get_xy bert_tensorflow_test get_bert_data_per_expression get_bert_data get_xy_per_expression get_already_processed get_xy bert_tensorflow_test get_bert_data_per_expression get_bert_data print Counter Bidirectional evaluate print Sequential Counter add Dense predict_proba TimeDistributed append Masking argmax GRU Activation compile fit print shape get_xy print pad_sequences to_categorical get_xy shape train_test_split array | TadejSkvorc/MICE | 1,037 |
Tae-Eon/Gen-CUDE | ['denoising'] | ['Unsupervised Neural Universal Denoiser for Finite-Input General-Output Noisy Channel'] | synthetic/pytorch/AISTAT_4ary_Gen-CUDE-pytorch-nonsquare.py synthetic/keras/AISTAT_binary_Gen-CUDE.py synthetic/pytorch/AISTAT_4ary_Gen-CUDE-pytorch.py synthetic/pytorch/AISTAT_binary_Gen-CUDE-pytorch.py synthetic/pytorch/utils.py synthetic/keras/AISTAT_10ary_Gen-CUDE.py synthetic/pytorch/AISTAT_10ary_Gen-CUDE-pytorch.py synthetic/pytorch/MLP_based_models.py synthetic/keras/AISTAT_4ary_Gen-CUDE-nonsquare.py synthetic/keras/utils.py synthetic/keras/AISTAT_4ary_Gen-CUDE.py find_nearest_integer base_symbol set_seed transform_to_wide input_context_without_middle_symbol Quantizer_4ary con_noisy_awgn error_rate p_vector_from_wide_pdf_table middle_y decision_boundary net_output PRINT transform_to_narrow Quantizer_v1 init_params sym_mat source_generator_v2 MLP decision_boundary net_output PRINT find_nearest_integer set_seed input_context_without_middle_symbol con_noisy_awgn error_rate p_vector_from_wide_pdf_table Quantizer_v1 init_params base_symbol softmax sym_mat transform_to_wide Quantizer_4ary middle_y transform_to_narrow source_generator_v2 seed manual_seed print str write now arange ones range int random copy zeros float argmax range randn zeros range len round int_ zeros range int_ len zeros round range len dtype astype dtype astype zeros hstack range len zeros range len uniform_ parameters kaiming_normal_ to numpy arange base_symbol deepcopy arange int_ print range len base_symbol deepcopy int_ print range uniform kaiming_uniform_ reshape exp | # Gen-CUDE * Gen-CUDE is an unsupervised neural network-based universal denoiser for the finite-input, general-output channel. * Code accompanying the paper "Unsupervised Neural Universal Denoiser for Finite-Input General-Output Noisy Channel" (https://arxiv.org/abs/2003.02623). * Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020, Palermo, Italy. PMLR: Volume 108. Copyright 2020 by the author(s). # Requirements * Python 3.6.6 * CUDA v9.2 * Tensorflow v1.15.0 * Keras v2.2.4 # Results | 1,038 |
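To make the "finite-input, general-output channel" setting concrete: the clean sequence takes values in a small alphabet, the observation is continuous (for example the symbol plus Gaussian noise), and the denoiser must map observations back to symbols without clean training data. The toy baseline below uses assumed names and defaults (it is not the repository's `con_noisy_awgn` or `Quantizer_4ary` code) and shows the naive nearest-symbol rule that a learned denoiser is meant to improve on.

```python
import numpy as np

def nearest_symbol_baseline(n=10000, sigma=0.7, alphabet=(0, 1, 2, 3), seed=0):
    """Toy finite-input / general-output channel with a naive symbol-by-symbol decoder."""
    rng = np.random.default_rng(seed)
    x = rng.choice(alphabet, size=n)                  # clean 4-ary source sequence
    z = x + sigma * rng.standard_normal(n)            # continuous (general) channel output: AWGN
    x_hat = np.clip(np.rint(z), min(alphabet), max(alphabet)).astype(int)
    return float(np.mean(x_hat != x))                 # symbol error rate of the naive baseline
```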
Taichi-Yoshikawa/NeuralNetwork | ['stochastic optimization'] | ['Adam: A Method for Stochastic Optimization'] | neuralnetwork.py configuration.py Configuration SoftmaxWithLoss Relu Affine NeuralNetwork Sigmoid | # 3-Layer Neural Network Feed Forward Neural Network (FFNN) <img src="https://miro.medium.com/max/2636/1*3fA77_mLNiJTSgZFhYnU0Q.png" width=70%> [cited from https://towardsdatascience.com/build-up-a-neural-network-with-python-7faea4561b31] ## Implementation - layer-based implementation, intended to be easy to extend to Deep Learning ### Layer Structure 1. Affine 2. ReLU 3. Affine | 1,039 |
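The layer structure above can be illustrated with a minimal NumPy sketch of two layer objects with forward and backward methods. This is a generic example, not the repository's `Affine` and `Relu` classes, whose exact interfaces are not shown here; the initialization scale is an arbitrary choice.

```python
import numpy as np

# Generic layer-object example; the repository's Affine/Relu classes may use a different interface.
class Affine:
    """Fully connected layer: y = x @ W + b, with gradients cached for the optimizer."""
    def __init__(self, n_in, n_out):
        self.W = 0.01 * np.random.randn(n_in, n_out)
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x
        return x @ self.W + self.b

    def backward(self, dout):
        self.dW = self.x.T @ dout
        self.db = dout.sum(axis=0)
        return dout @ self.W.T                        # gradient with respect to the layer input

class Relu:
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask

    def backward(self, dout):
        return dout * self.mask
```

Chaining forward through [Affine, Relu, Affine] plus a softmax-with-loss head, and backward through the same list in reverse, gives the 3-layer FFNN described above; an Adam-style update then only needs each layer's cached `dW` and `db`.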
TaiwanRobert/iCAN_for_live_video | ['human object interaction detection'] | ['iCAN: Instance-Centric Attention Network for Human-Object Interaction Detection'] | frcnn/lib/setup.py ican/tools/.ipynb_checkpoints/Test_ResNet_HICO-checkpoint.py ican/misc/misc/Object_Detector.py ican/lib/ult/visualization.py frcnn/lib/utils/__init__.py ican/lib/ult/.ipynb_checkpoints/visualization-checkpoint.py frcnn/lib/model/train_val.py frcnn/lib/model/.ipynb_checkpoints/config-checkpoint.py frcnn/lib/model/__init__.py frcnn/lib/model/bbox_transform.py ican/tools/_init_paths.py frcnn/lib/utils/timer.py ican/lib/ult/ult.py ican/tools/.ipynb_checkpoints/Train_ResNet_HICO-checkpoint.py ican/lib/ult/apply_prior.py frcnn/lib/layer_utils/snippets.py ican/lib/models/test_HICO.py frcnn/lib/utils/visualization.py ican/lib/models/.ipynb_checkpoints/test_VCOCO-checkpoint.py frcnn/lib/layer_utils/proposal_target_layer.py ican/lib/models/.ipynb_checkpoints/test_demo-checkpoint.py frcnn/tools/Object_Detector.py frcnn/tools/.ipynb_checkpoints/demo-checkpoint.py frcnn/lib/datasets/voc_eval.py ican/misc/misc/.ipynb_checkpoints/resnet_v1-checkpoint.py ican/lib/models/.ipynb_checkpoints/train_Solver_HICO-checkpoint.py frcnn/tools/reval.py frcnn/lib/model/nms_wrapper.py frcnn/lib/layer_utils/generate_anchors.py ican/lib/ult/Generate_HICO_detection.py frcnn/lib/nets/vgg16.py ican/lib/ult/timer.py ican/lib/ult/Download_data.py frcnn/tools/.ipynb_checkpoints/trainval_net-checkpoint.py ican/tools/Train_ResNet_VCOCO.py frcnn/tools/.ipynb_checkpoints/Object_Detector-checkpoint.py frcnn/lib/datasets/coco.py ican/lib/ult/vsrl_eval.py frcnn/lib/roi_data_layer/minibatch.py frcnn/tools/demo.py frcnn/lib/roi_data_layer/layer.py ican/tools/Demo.py ican/lib/models/train_Solver_HICO.py ican/tools/Train_ResNet_HICO.py ican/lib/models/.ipynb_checkpoints/test_HICO-checkpoint.py frcnn/lib/datasets/factory.py frcnn/lib/roi_data_layer/__init__.py ican/misc/.ipynb_checkpoints/resnet_v1-checkpoint.py frcnn/lib/nets/resnet_v1.py frcnn/lib/utils/blob.py frcnn/lib/datasets/tools/mcg_munge.py ican/lib/networks/iCAN_ResNet50_VCOCO.py ican/lib/ult/.ipynb_checkpoints/ult-checkpoint.py frcnn/tools/.ipynb_checkpoints/test_net-checkpoint.py ican/lib/models/train_Solver_VCOCO.py frcnn/lib/.ipynb_checkpoints/setup-checkpoint.py frcnn/lib/model/.ipynb_checkpoints/nms_wrapper-checkpoint.py frcnn/lib/model/test.py frcnn/tools/_init_paths.py frcnn/lib/layer_utils/proposal_top_layer.py frcnn/lib/utils/.ipynb_checkpoints/timer-checkpoint.py frcnn/tools/.ipynb_checkpoints/_init_paths-checkpoint.py video.py frcnn/lib/layer_utils/anchor_target_layer.py frcnn/lib/datasets/ds_utils.py ican/misc/resnet_v1.py frcnn/tools/test_net.py ican/lib/ult/.ipynb_checkpoints/apply_prior-checkpoint.py ican/lib/networks/.ipynb_checkpoints/iCAN_ResNet50_VCOCO-checkpoint.py ican/tools/Test_ResNet_VCOCO.py frcnn/lib/layer_utils/proposal_layer.py frcnn/lib/roi_data_layer/roidb.py frcnn/lib/nets/network.py frcnn/tools/live.py ican/misc/Object_Detector.py frcnn/lib/datasets/__init__.py ican/tools/.ipynb_checkpoints/Demo-checkpoint.py frcnn/lib/nets/.ipynb_checkpoints/resnet_v1-checkpoint.py frcnn/tools/trainval_net.py ican/lib/models/test_VCOCO.py frcnn/lib/datasets/pascal_voc.py frcnn/tools/convert_from_depre.py frcnn/tools/.ipynb_checkpoints/live-checkpoint.py live.py ican/lib/models/test_demo.py frcnn/lib/datasets/imdb.py .ipynb_checkpoints/live-checkpoint.py frcnn/lib/model/config.py _init_paths.py ican/lib/ult/.ipynb_checkpoints/config-checkpoint.py 
frcnn/lib/model/.ipynb_checkpoints/test-checkpoint.py ican/lib/networks/.ipynb_checkpoints/iCAN_ResNet50_HICO-checkpoint.py ican/lib/ult/config.py ican/lib/networks/iCAN_ResNet50_HICO.py ican/tools/Diagnose_VCOCO.py .ipynb_checkpoints/video-checkpoint.py ican/tools/Test_ResNet_HICO.py frcnn/lib/nms/py_cpu_nms.py ican/lib/networks/iCAN_ResNet50_VCOCO_Early.py ican/tools/.ipynb_checkpoints/_init_paths-checkpoint.py frcnn/lib/nets/mobilenet_v1.py ican/lib/ult/.ipynb_checkpoints/Generate_HICO_detection-checkpoint.py ican/lib/ult/vcoco_diagnose.py ican/misc/misc/resnet_v1.py end_pt create_bbox create_text get_color main print_image end_pt create_bbox create_text get_color print_image add_path end_pt create_bbox create_text get_color main print_image end_pt create_bbox create_text get_color print_image find_in_path customize_compiler_for_nvcc custom_build_ext locate_cuda find_in_path customize_compiler_for_nvcc custom_build_ext locate_cuda coco unique_boxes xywh_to_xyxy validate_boxes xyxy_to_xywh filter_small_boxes get_imdb list_imdbs imdb pascal_voc parse_rec voc_eval voc_ap munge _unmap _compute_targets anchor_target_layer generate_anchors _scale_enum _whctrs _ratio_enum _mkanchors proposal_layer_tf proposal_layer _get_bbox_regression_labels _compute_targets proposal_target_layer _sample_rois proposal_top_layer proposal_top_layer_tf generate_anchors_pre generate_anchors_pre_tf clip_boxes bbox_transform bbox_transform_inv clip_boxes_tf bbox_transform_inv_tf cfg_from_list get_output_tb_dir cfg_from_file _merge_a_into_b get_output_dir nms _rescale_boxes _clip_boxes _get_blobs _get_image_blob apply_nms test_net im_detect train_net get_training_roidb filter_roidb SolverWrapper cfg_from_list get_output_tb_dir cfg_from_file _merge_a_into_b get_output_dir nms _rescale_boxes _clip_boxes _get_blobs _get_image_blob apply_nms test_net im_detect separable_conv2d_same mobilenetv1 mobilenet_v1_base mobilenet_v1_arg_scope Network resnet_arg_scope resnetv1 vgg16 resnet_arg_scope resnetv1 py_cpu_nms RoIDataLayer get_minibatch _get_image_blob prepare_roidb im_list_to_blob prep_im_for_blob Timer _draw_single_box draw_bounding_boxes Timer combined_roidb get_variables_in_checkpoint_file convert_from_depre convert_names parse_args parse_args vis_detections demo end_pt create_bbox create_text get_color main print_image ParseArgs bb_IOU run_frcnn demo parse_args from_dets parse_args parse_args combined_roidb add_path parse_args vis_detections demo end_pt create_bbox create_text get_color main print_image ParseArgs bb_IOU run_frcnn demo parse_args parse_args combined_roidb add_path get_blob test_net im_detect get_blob test_net im_detect get_blob test_net im_detect train_net SolverWrapper train_net SolverWrapper get_blob test_net im_detect get_blob test_net im_detect get_blob test_net im_detect train_net SolverWrapper resnet_arg_scope ResNet50 resnet_arg_scope ResNet50 resnet_arg_scope ResNet50 resnet_arg_scope ResNet50 resnet_arg_scope ResNet50 apply_prior download_file_from_google_drive save_HICO Generate_HICO_detection Timer Get_Next_Instance_HO_spNeg bbox_trans Augmented_box bb_IOU Augmented_HO_spNeg Get_next_sp Augmented_HO_Neg_HICO Generate_action Get_Next_Instance_HO_Neg Get_Next_Instance_HO_Neg_HICO Generate_action_HICO Augmented_HO_Neg _load_vcoco VCOCOdiagnose get_overlap clip_xyxy_to_image voc_ap draw_bounding_boxes_HOI_PIC _draw_single_box draw_bounding_boxes draw_bounding_boxes_HOI _load_vcoco get_overlap clip_xyxy_to_image voc_ap VCOCOeval apply_prior save_HICO Generate_HICO_detection 
Get_Next_Instance_HO_spNeg bbox_trans Augmented_box bb_IOU Augmented_HO_spNeg Get_next_sp Augmented_HO_Neg_HICO Generate_action Get_Next_Instance_HO_Neg Get_Next_Instance_HO_Neg_HICO Generate_action_HICO Augmented_HO_Neg draw_bounding_boxes_HOI_PIC _draw_single_box draw_bounding_boxes draw_bounding_boxes_HOI parse_args bb_IOU demo resnet_arg_scope resnetv1 resnet_arg_scope resnetv1 parse_args bb_IOU demo resnet_arg_scope resnetv1 resnet_arg_scope resnetv1 ParseArgs run_ican parse_args parse_args parse_args parse_args add_path ParseArgs run_ican parse_args parse_args add_path LINE_AA end_pt putText FONT_HERSHEY_COMPLEX_SMALL rectangle get_color str int line get_color items list uint8 print tuple create_bbox set add shape create_text fill zeros run_frcnn run_ican print print_image put reset_default_graph insert pathsep pjoin exists split find_in_path items list pjoin pathsep dirname sep append _compile compiler_so dot array unique int parse findall text append find arange concatenate size maximum sum max range parse_rec cumsum argmax max sum range eps format astype mkdir float enumerate minimum join print sort maximum voc_ap argsort zeros bool array len join format print rename splitext listdir makedirs RPN_BBOX_INSIDE_WEIGHTS _unmap bbox_overlaps argmax RPN_FG_FRACTION ones transpose sum RPN_BATCHSIZE ascontiguousarray choice fill RPN_POSITIVE_WEIGHT empty int RPN_CLOBBER_POSITIVES reshape _compute_targets zeros array fill empty vstack _ratio_enum array hstack sqrt _whctrs round _mkanchors _whctrs _mkanchors decode nms RPN_POST_NMS_TOP_N clip_boxes reshape hstack bbox_transform_inv RPN_NMS_THRESH RPN_PRE_NMS_TOP_N zeros to_float decode RPN_POST_NMS_TOP_N reshape concat bbox_transform_inv_tf clip_boxes_tf RPN_NMS_THRESH RPN_PRE_NMS_TOP_N gather zeros non_max_suppression FG_FRACTION BATCH_SIZE _sample_rois USE_GT reshape astype float32 vstack zeros round zeros BBOX_INSIDE_WEIGHTS int shape BBOX_NORMALIZE_STDS bbox_transform BBOX_NORMALIZE_MEANS BBOX_NORMALIZE_TARGETS_PRECOMPUTED array _get_bbox_regression_labels size min set_trace ascontiguousarray _compute_targets choice bbox_overlaps argmax max append RPN_TOP_N clip_boxes reshape hstack bbox_transform_inv choice zeros to_float RPN_TOP_N reshape concat bbox_transform_inv_tf clip_boxes_tf top_k gather zeros generate_anchors arange reshape transpose astype float32 int32 meshgrid generate_anchors constant reshape transpose multiply add stack meshgrid range transpose log dtype exp astype shape zeros minimum maximum dtype exp subtract multiply add cast minimum maximum join EXP_DIR name abspath ROOT_DIR makedirs join EXP_DIR name abspath ROOT_DIR makedirs items list ndarray isinstance type array _merge_a_into_b literal_eval zip split MAX_SIZE min astype float32 SCALES shape resize append im_list_to_blob float max _get_image_blob minimum maximum range _clip_boxes reshape bbox_transform_inv _get_blobs shape tile array test_image BBOX_REG nms range copy len seed nms num_classes tic imread range image_path_at im_detect image_index format evaluate_detections average_time hstack astype NMS toc join RNG_SEED print float32 get_output_dir len print prepare_roidb append_flipped_images USE_FLIPPED print format len ConfigProto filter_roidb pad l2_regularizer WEIGHT_DECAY REGU_DEPTH truncated_normal_initializer append maximum minimum USE_ALL_GT _get_image_blob randint empty array len prep_im_for_blob PIXEL_MEANS imread range len image_index toarray roidb argmax max range image_path_at len zeros max range len min astype float32 shape resize float max line Draw 
text rectangle ceil getsize fromarray int uint8 copy _draw_single_box round array range add_argument exit print_help ArgumentParser get_imdb imdb extend classes NewCheckpointReader get_variable_to_shape_map replace num_classes close ConfigProto Session makedirs format subplots set_title text draw axis tight_layout add_patch imshow Rectangle toc join nms format vis_detections print total_time astype float32 tic im_detect Timer imread DATA_DIR enumerate sleep play stop nan append minimum maximum ParseArgs demo matlab_eval evaluate_detections print get_imdb apply_nms NMS competition_mode comp_mode shape astype float32 reshape concatenate get_blob apply_prior append test_image_HO empty max range test_image_H zfill imread DATA_DIR int dump iglob DATA_DIR open rstrip join remove makedirs get get_confirm_token save_response_content Session int list items tolist min zfill savemat append range len join remove save_HICO makedirs copy zeros bbox_trans concatenate reshape float64 min astype floor randint max zeros reshape reshape astype float32 zfill shape Augmented_HO_Neg imread DATA_DIR len list concatenate Augmented_box reshape copy Generate_action sample empty range len reshape astype float32 zfill Augmented_HO_spNeg shape imread DATA_DIR len list concatenate Augmented_box reshape Generate_action sample empty range len zeros reshape reshape astype float32 zfill Augmented_HO_Neg_HICO shape imread DATA_DIR list concatenate Augmented_box reshape Generate_action_HICO sample empty range len T print reshape range len minimum maximum minimum maximum fromarray uint8 copy _draw_single_box round array enumerate fromarray uint8 copy _draw_single_box round array enumerate iteritems ParseArgs print object_thres prior_flag HOI_Detection human_thres test_net | # iCAN_for_live_video
This project uses the iCAN model from https://github.com/vt-vl-lab/iCAN and the Faster R-CNN model from https://github.com/endernewton/tf-faster-rcnn.
iCAN itself is an action (human-object interaction) detection algorithm; its input is the output of Faster R-CNN.
In other words, iCAN cannot produce bounding boxes by itself; it only uses the relations between the people and objects in the Faster R-CNN detections to infer the likely actions (see the sketch below).
For the details of how iCAN works, please refer to the original authors' paper: https://arxiv.org/pdf/1808.10437v1.pdf
The code in this part is dedicated to producing the output (inference) results only, so the training part is not modified; most of the original authors' code is kept as-is.
For the dataset and weights, please run the corresponding steps inside /ican.
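To make the two-stage data flow concrete, here is a minimal, hypothetical Python sketch; `detector` and `hoi_model` are placeholders standing in for the Faster R-CNN and iCAN wrappers, not functions from this repo:

```python
# Hypothetical sketch of the two-stage flow described above (placeholder names, not this repo's API):
# stage 1: Faster R-CNN proposes person/object boxes; stage 2: the HOI model scores each pair.
def detect_hoi(frame, detector, hoi_model, score_thresh=0.5):
    detections = detector(frame)  # assumed to yield (box, class_name, score) tuples
    humans = [d for d in detections if d[1] == "person" and d[2] > score_thresh]
    objects = [d for d in detections if d[1] != "person" and d[2] > score_thresh]
    results = []
    for h_box, _, _ in humans:
        for o_box, o_cls, _ in objects:
            # iCAN never produces boxes itself; it only scores actions for a given human-object pair
            action_scores = hoi_model(frame, h_box, o_box)
            results.append((h_box, o_box, o_cls, action_scores))
    return results
```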
| 1,040 |
TalSchuster/FeverSymmetric | ['fact verification'] | ['Towards Debiasing Fact Verification Models'] | compute_fever_wordstats.py ngram_bias.py get_counters get_single_stopwords RegexpTokenizer join defaultdict print len Counter tqdm lower Reader iter most_common sum tokenize range values open RegexpTokenizer join defaultdict str get_single_stopwords ngrams print tqdm lower Reader iter tokenize open | # Towards Debiasing Fact Verification Models - Symmetric evaluation set based on the FEVER (fact verification) dataset - Regularization-based method # Symmetric dataset To download the symmetric evaluation dataset from the EMNLP 2019 paper [Towards Debiasing Fact Verification Models](https://arxiv.org/abs/1908.05267) use this [link](https://raw.githubusercontent.com/TalSchuster/FeverSymmetric/master/symmetric_v0.1/fever_symmetric_generated.jsonl). ## Version 0.2 We release a version that includes new cases. This version is split to dev (708 pairs) and test (712 pairs) to allow models to use the dev set for hyperparameter tuning. ## Version 0.1 The version used in "Towards Debiasing Fact Verification Models" paper. We've implemented the baseline and the reweighted version on the latest version of the pytorch-transformers repository ([link](https://github.com/TalSchuster/pytorch-transformers)). Since the test set is small, there are some random variations across different runs using different servers/GPUs. Therefore, to allow better comparison across methods, we've run the training five times with different random seeds and report the average and std of the runs: | 1,041 |
Tanasho0928/chat-oriented | ['word embeddings', 'response generation'] | ['A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues', 'Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models'] | utils/bleu.py utils/pmi.py utils/train.py auto_eval.py load_dailydialog.py utils/dataset.py model/encdec.py model/vhcr.py utils/build_vocab.py model/modules.py utils/dist.py utils/test.py utils/pad.py model/vhred.py main.py utils/batch_sampler.py utils/log_print.py model/hred.py compare extend_dial load_data save_data update_hparams parse_hparams get_argparse EncDec HRED Maxout Attn HREDDecoderRNN FFN EncoderRNN ContextRNN l2_pooling LuongAttnDecoderRNN sample_z VHCR VHRED collate_fn RandomBatchSampler get_bleu get_wwcount_matrix get_vocab get_pmi_matrix build_vocab DialDataset get_dist_1 get_dist_2 loginfo_and_print logerror_and_print pad_batch get_pmi chat test inference iteration run_epochs valid items format get_dist_2 loginfo_and_print get_bleu get_dist_1 get_pmi endswith join dirname range len join add_argument_group add_argument ArgumentParser update cuda update isinstance pad_batch dict Mapping keys join get_vocab format loginfo_and_print get_wwcount_matrix get_pmi_matrix len len Counter tqdm items csr_matrix Counter tqdm range enumerate items csr_matrix tqdm flatten log2 sum max print info error write stack max isinstance toarray zip tqdm vocab format tqdm eval set_description collate_fn inference enumerate format loginfo_and_print eval collate_fn input inference join format info range enumerate init_epoch format StepLR valid loginfo_and_print Adam tqdm parameters DataLoader set_description load_state_dict save info train step range enumerate eval train enumerate DataLoader | # chat-oriented Chat-oriented dialogue system implementation ## Requirements - pipenv - CUDA 9.0 - cuDNN 7.5.1 ## Usage #### Launching pipenv shell. ```sh pipenv shell | 1,042 |
Tanasho0928/ncm | ['word embeddings', 'response generation'] | ['A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues', 'Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models'] | utils/bleu.py utils/pmi.py utils/train.py auto_eval.py load_dailydialog.py utils/dataset.py model/encdec.py model/vhcr.py utils/build_vocab.py model/modules.py utils/dist.py utils/test.py utils/pad.py model/vhred.py main.py utils/batch_sampler.py utils/log_print.py model/hred.py compare extend_dial load_data save_data update_hparams parse_hparams get_argparse EncDec HRED Maxout Attn HREDDecoderRNN FFN EncoderRNN ContextRNN l2_pooling LuongAttnDecoderRNN sample_z VHCR VHRED collate_fn RandomBatchSampler get_bleu get_wwcount_matrix get_vocab get_pmi_matrix build_vocab DialDataset get_dist_1 get_dist_2 loginfo_and_print logerror_and_print pad_batch get_pmi chat test inference iteration run_epochs valid items format get_dist_2 loginfo_and_print get_bleu get_dist_1 get_pmi endswith join dirname range len join add_argument_group add_argument ArgumentParser update cuda update isinstance pad_batch dict Mapping keys join get_vocab format loginfo_and_print get_wwcount_matrix get_pmi_matrix len len Counter tqdm items csr_matrix Counter tqdm range enumerate items csr_matrix tqdm flatten log2 sum max print info error write stack max isinstance toarray zip tqdm vocab format tqdm eval set_description collate_fn inference enumerate format loginfo_and_print eval collate_fn input inference join format info range enumerate init_epoch format StepLR valid loginfo_and_print Adam tqdm parameters DataLoader set_description load_state_dict save info train step range enumerate eval train enumerate DataLoader | # chat-oriented Chat-oriented dialogue system implementation ## Requirements - pipenv - CUDA 9.0 - cuDNN 7.5.1 ## Usage #### Launching pipenv shell. ```sh pipenv shell | 1,043 |
TangrisJones/AI-dfa | ['face swapping'] | ['DeepFaceLab: Integrated, flexible and extensible face-swapping framework'] | models/Model_XSeg/Model.py mainscripts/VideoEd.py merger/MergerScreen/MergerScreen.py mainscripts/dev_misc.py merger/InteractiveMergerSubprocessor.py samplelib/SampleGeneratorImageTemporal.py core/leras/models/__init__.py samplelib/Sample.py core/joblib/__init__.py core/imagelib/estimate_sharpness.py core/joblib/MPFunc.py core/leras/layers/DenseNorm.py core/leras/nn.py samplelib/SampleProcessor.py core/leras/archis/__init__.py core/leras/layers/Conv2D.py core/stdex.py core/imagelib/text.py core/leras/models/PatchDiscriminator.py core/leras/__init__.py models/Model_XSeg/__init__.py core/imagelib/equalize_and_stack_square.py samplelib/__init__.py core/imagelib/sd/draw.py core/randomex.py core/joblib/MPClassFuncOnDemand.py core/leras/optimizers/__init__.py core/leras/initializers/__init__.py mainscripts/Extractor.py mainscripts/Sorter.py core/mathlib/__init__.py core/leras/optimizers/OptimizerBase.py core/imagelib/draw.py samplelib/SampleGeneratorBase.py core/leras/layers/BatchNorm2D.py core/leras/layers/ScaleAdd.py XSegEditor/QCursorDB.py models/__init__.py core/joblib/SubprocessGenerator.py DFLIMG/DFLIMG.py core/leras/layers/__init__.py models/ModelBase.py facelib/FaceEnhancer.py core/joblib/ThisThreadGenerator.py XSegEditor/QIconDB.py core/imagelib/blursharpen.py merger/MergeMasked.py core/cv2ex.py models/Model_Quick96/__init__.py main.py core/leras/archis/ArchiBase.py localization/localization.py facelib/LandmarksProcessor.py core/leras/layers/Dense.py models/Model_SAEHD/__init__.py mainscripts/Merger.py mainscripts/XSegUtil.py core/imagelib/filters.py XSegEditor/QStringDB.py core/leras/initializers/CA.py core/imagelib/__init__.py core/leras/models/ModelBase.py samplelib/SampleGeneratorFaceTemporal.py core/leras/layers/InstanceNorm2D.py facelib/S3FDExtractor.py samplelib/SampleLoader.py samplelib/SampleGeneratorFaceXSeg.py core/qtex/qtex.py DFLIMG/__init__.py core/leras/optimizers/RMSprop.py core/structex.py core/leras/layers/BlurPool.py facelib/__init__.py core/leras/archis/DeepFakeArchi.py core/imagelib/SegIEPolys.py mainscripts/FacesetEnhancer.py merger/FrameInfo.py models/Model_SAEHD/Model.py core/mplib/__init__.py samplelib/SampleGeneratorFacePerson.py merger/MergerScreen/__init__.py core/imagelib/common.py core/imagelib/sd/__init__.py core/leras/layers/AdaIN.py core/leras/layers/LayerBase.py facelib/FANExtractor.py core/leras/models/XSeg.py samplelib/SampleGeneratorFace.py merger/MergeAvatar.py core/pathex.py core/leras/layers/Conv2DTranspose.py core/qtex/QXMainWindow.py merger/__init__.py core/imagelib/morph.py core/joblib/SubprocessorBase.py mainscripts/Trainer.py samplelib/PackedFaceset.py core/mathlib/umeyama.py mainscripts/Util.py core/interact/__init__.py core/leras/layers/FRNorm2D.py core/leras/layers/Saveable.py core/leras/layers/TLU.py facelib/XSegNet.py core/mplib/MPSharedList.py core/leras/models/CodeDiscriminator.py facelib/FaceType.py XSegEditor/XSegEditor.py merger/MergerConfig.py core/leras/ops/__init__.py core/interact/interact.py core/leras/device.py core/imagelib/sd/calc.py core/imagelib/reduce_colors.py samplelib/SampleGeneratorImage.py localization/__init__.py DFLIMG/DFLJPG.py core/qtex/__init__.py core/qtex/QSubprocessor.py core/qtex/QXIconButton.py samplelib/SampleGeneratorFaceCelebAMaskHQ.py core/imagelib/warp.py core/osex.py core/imagelib/color_transfer.py colab-script.py models/Model_Quick96/Model.py process_dev_test process_merge 
process_videoed_cut_video process_train process_faceset_enhancer process_xsegeditor process_xsegapply process_xsegremove process_xsegremovelabels process_videoed_video_from_sequence process_xsegfetch process_util process_extract fixPathAction process_videoed_extract_video process_sort bad_args process_videoed_denoise_image_sequence cv2_imwrite cv2_imread set_process_dpi_aware get_screen_size set_process_lowest_prio get_image_paths move_all_files write_bytes_safe get_first_file_by_stem get_image_unique_filestem_paths get_all_dir_names get_file_paths delete_all_files scantree get_paths get_all_dir_names_startswith random_normal suppress_stdout_stderr struct_unpack LinearMotionBlur blursharpen _scale_array color_transfer color_transfer_idt color_transfer_mkl reinhard_color_transfer laplacian_matrix lab_image_stats seamless_clone linear_color_transfer channel_hist_match color_transfer_sot color_transfer_mix color_hist_match overlay_alpha_image cut_odd_image normalize_channels draw_polygon draw_rect equalize_and_stack_square compute _calculate_sharpness_metric marziliano_method get_block_contrast _simple_thinning estimate_sharpness is_edge_block sobel apply_random_motion_blur apply_random_rgb_levels apply_random_hsv_shift apply_random_bilinear_resize apply_random_gaussian_blur morphTriangle morph_by_points applyAffineTransform reduce_colors SegIEPolys SegIEPolyType SegIEPoly get_text_image draw_text_lines draw_text _get_pil_font get_draw_text_lines warp_by_params gen_warp_params dist_to_edges random_circle_faded circle_faded InteractBase InteractColab InteractDesktop MPClassFuncOnDemand MPFunc SubprocessGenerator Subprocessor ThisThreadGenerator Devices Device nn ArchiBase DeepFakeArchi CAInitializerSubprocessor initializers AdaIN BatchNorm2D BlurPool Conv2D Conv2DTranspose Dense DenseNorm FRNorm2D InstanceNorm2D LayerBase Saveable ScaleAdd TLU CodeDiscriminator ModelBase PatchDiscriminator XSeg dssim concat average_gv_list resize2d_bilinear flatten rgb_to_lab space_to_depth tf_gradients random_binomial style_loss gelu init_weights tf_get_value upsample2d reshape_4D batch_set_value max_pool average_tensor_list gaussian_blur depth_to_space OptimizerBase RMSprop umeyama get_power_of_two rotationMatrixToEulerAngles polygon_area ArrayFillerSubprocessor MPSharedList IndexHost Index2DHost ListHost DictHostCli DictHost QSubprocessor QDarkPalette QActionEx QSize_to_np QImage_from_np QImage_to_np QPixmap_from_np QPoint_to_np QPoint_from_np QXIconButton QXMainWindow DFLIMG DFLJPG FaceEnhancer FaceType FANExtractor blur_image_hull_mask mirror_landmarks get_face_struct_mask estimate_pitch_yaw_roll convert_98_to_68 expand_eyebrows get_rect_from_landmarks get_transform_mat draw_rect_landmarks get_cmask transform_points estimate_averaged_yaw calc_face_pitch alpha_to_color get_image_eye_mask draw_landmarks get_image_hull_mask S3FDExtractor XSegNet dev_test_68 dev_test1 dev_resave_pngs extract_vggface2_dataset extract_umd_csv dev_segmented_trash process_folder FacesetEnhancerSubprocessor extract_video video_from_sequence denoise_image_sequence cut_video remove_xseg remove_xseg_labels apply_xseg fetch_xseg FrameInfo InteractiveMergerSubprocessor MergeFaceAvatar process_frame_info MergeMasked MergeMaskedFace MergerConfigMasked MergerConfigFaceAvatar MergerConfig ScreenManager ScreenAssets Screen ModelBase import_model QModel SAEHDModel XSegModel PackedFaceset Sample SampleType SampleGeneratorBase SampleGeneratorFace SampleGeneratorFaceCelebAMaskHQ MaskType SampleGeneratorFacePerson SampleGeneratorFaceTemporal 
SampleGeneratorFaceXSeg SegmentedSampleFilterSubprocessor SampleGeneratorImage SampleGeneratorImageTemporal FaceSamplesLoaderSubprocessor SampleLoader SampleProcessor QCursorDB QIconDB QStringDB ImagePreviewSequenceBar QUIConfig QCanvasOperator LoaderQSubprocessor CanvasConfig OpMode QCanvas DragType ViewLock ColorScheme QCanvasControlsLeftBar start QCanvasControlsRightBar MainWindow PTEditMode main set_process_lowest_prio main set_process_lowest_prio unpack_faceset pack save_faceset_metadata log_info restore_faceset_metadata_folder pack_faceset save_faceset_metadata_folder restore_faceset_metadata Path input_dir unpack recover_original_aligned_filename set_process_lowest_prio add_landmarks_debug_images main set_process_lowest_prio main set_process_lowest_prio output_ext fps extract_video output_dir input_file set_process_lowest_prio audio_track_id from_time bitrate to_time cut_video input_file set_process_lowest_prio factor denoise_image_sequence set_process_lowest_prio input_dir video_from_sequence set_process_lowest_prio Path set_process_lowest_prio input_dir process_folder dev_test set_process_lowest_prio input_dir start Path set_process_lowest_prio input_dir model_dir apply_xseg Path input_dir set_process_lowest_prio Path remove_xseg set_process_lowest_prio input_dir remove_xseg_labels Path set_process_lowest_prio input_dir Path fetch_xseg set_process_lowest_prio input_dir print_help exit loader_func asarray bytearray imencode suffix nice SetPriorityClass HANDLE GetCurrentProcess SetProcessDPIAware user32 write_bytes parent name unlink rename exists is_dir scandir str list scandir any Path scantree exists append remove get_image_paths name stem set add verbose_print_func Path exists Path exists Path exists str list lower scandir Path startswith append exists str sorted list path lower scandir Path exists name Path rename get_file_paths unlink Path get_file_paths normal empty prod range calcsize warpAffine ones getRotationMatrix2D zeros sum medianBlur addWeighted ones zeros GaussianBlur max dtype reshape astype copy argsort shape bilateralFilter fill empty range eps T clip reshape eig dot shape sqrt cov mean diag T reshape min astype float32 empty_like solve dot shape histogram interp max range tolil setdiag lil_matrix reshape laplacian_matrix shape flatten dot argwhere append range tocsc _scale_array uint8 astype float32 merge lab_image_stats COLOR_LAB2BGR cvtColor split T reshape transpose mean dot eigh eye cholesky split min max float64 astype shape unique interp ravel dtype astype shape channel_hist_match range uint8 astype float32 COLOR_BGR2LAB color_transfer_sot COLOR_LAB2BGR cvtColor uint8 color_transfer_idt color_transfer_mkl astype float32 reinhard_color_transfer linear_color_transfer color_transfer_sot clip shape repeat len shape shape range tuple line range len draw_polygon concatenate shape resize expand_dims max enumerate T convolve square mean sqrt array shape zeros float64 marziliano_method astype canny sobel gradient atan2 shape any zeros round range int exp slice get_block_contrast shape flipud round zeros is_edge_block rot90 range cvtColor COLOR_BGR2GRAY rand random clip array COLOR_HSV2BGR random merge COLOR_BGR2HSV randint clip cvtColor split LinearMotionBlur randint random randint GaussianBlur random int rand random shape resize INTER_LINEAR float32 getAffineTransform float32 fillConvexPoly shape boundingRect int32 applyAffineTransform zeros expand_dims array shape morphTriangle zeros simplices fromarray uint8 convert astype COLOR_RGB2BGR array cvtColor truetype 
asarray Draw get_default_ttf_font_name concatenate text new _get_pil_font shape clip draw_text range len draw_text_lines zeros shape T random astype copy float32 getRotationMatrix2D dict uniform linspace random_normal warpAffine remap norm clip einsum concatenate norm reshape empty abs clip max random randint initializer inputs append batch_set_value run gradients expand_dims __enter__ __exit__ enumerate reduce_mean __enter__ __exit__ concat pow tanh sqrt pi as_list reshape tile transpose value resize transpose reshape transpose randint float32 pad make_kernel tile depthwise_conv2d gaussian_blur dtype constant arange reshape float32 square reduce_mean reducer cast softmax tile as_list reshape transpose as_list reshape transpose constant reshape multiply matmul cast svd T ones matrix_rank mean dot eye sum diag sqrt atan2 shape Format_Grayscale8 Format_BGR888 Format_ARGB32 height reshape convertToFormat width constBits setsize range squeeze invertAffineTransform shape transform expand_dims get norm getAffineTransform polygon_area astype float32 transform_points sqrt estimate_averaged_yaw array transform_points FULL_NO_ALIGN get_transform_mat float32 array copy concatenate expand_eyebrows fillConvexPoly convexHull zeros int getStructuringElement astype fillConvexPoly MORPH_ELLIPSE convexHull dilate zeros GaussianBlur shape zeros concatenate process copy blend alpha_to_color zeros get_image_hull_mask gdf max clip int blur getStructuringElement min erode argwhere MORPH_ELLIPSE expand_dims copy draw_landmarks zeros expand_eyebrows concatenate polylines tuple shape get_image_hull_mask array circle get_transform_mat draw_rect transform_points draw_polygon draw_landmarks array array rotationMatrixToEulerAngles concatenate astype float32 pi solvePnP zeros array clip get pop get_image_paths parent log_info name stem progress_bar_generator get_all_dir_names Path mkdir run fromString split cv2_imread Path normalize_channels exists input_bool str log_info name stem append get_image_paths get_rect_from_landmarks unlink mkdir parent cv2_imwrite progress_bar_generator read_text split get str get_image_paths parent log_info name len unlink Path mkdir split log_err run range exists fromString input_bool get_image_paths progress_bar_generator get_all_dir_names Path x get_image_paths cv2_imwrite progress_bar_generator cv2_imread Path get_image_paths parent name stem rename Path mkdir append input_bool join get_image_paths log_info parent name copy unlink rmtree mkdir run update str get_image_paths parent input_str stem output get_first_file_by_stem unlink input_int mkdir Path log_err input run str suffix parent input_str stem overwrite_output input_int log_err Path input max run update str suffix parent progress_bar_generator output input_int rename log_err Path run clip enumerate suffix input_str wait input_int Path max input_bool str stem input update run_async get_image_paths close mkdir parent overwrite_output get_first_file_by_stem log_err probe load extract initialize get_image_paths log_info set_xseg_mask progress_bar_generator astype float32 get_resolution ask_choose_device shape XSegNet resize save load str get_image_paths log_info parent name has_polys progress_bar_generator copy get_seg_ie_polys mkdir load get_image_paths log_info set_xseg_mask input_str progress_bar_generator has_xseg_mask save load get_image_paths log_info input_str has_seg_ie_polys progress_bar_generator save set_seg_ie_polys warpAffine get_transform_mat astype float32 cv2_imread normalize_channels filename clip sharpen_func 
sharpen_mode concatenate predictor_func add_source_image process_frame_info temporal_face_count append range sharpen_amount predictor_func color_transfer_mkl motion_power bicubic_degrade_power motion_blur_power linear_color_transfer color_transfer_mix boundingRect resize reduce_colors max clip face_enhancer_func hist_match_threshold medianBlur super_resolution_power WARP_INVERSE_MAP ones LinearMotionBlur shape pad blur_mask_modifier image_denoise_power masked_hist_match blursharpen range color_hist_match BORDER_TRANSPARENT warpAffine sharpen_mode xseg_256_extract_func seamlessClone color_transfer_idt astype copy reinhard_color_transfer empty_like motion_deg INTER_CUBIC MORPH_ELLIPSE color_transfer_sot dilate GaussianBlur get_image_hull_mask NORMAL_CLONE uint8 int erode_mask_modifier getStructuringElement get_transform_mat float32 erode argwhere blursharpen_amount color_degrade_power landmarks_list concatenate astype float32 cv2_imread shape normalize_channels MergeMaskedFace filepath clip enumerate str parent cv2_imread locals __import__ globals dict setApplicationName setPalette QDarkPalette Path show str initialize log_info setWindowIcon addApplicationFont AA_EnableHighDpiScaling setStyle setFont gettempdir setAttribute QApplication path_contains app_icon MainWindow exec_ parent QFont raise_ AA_UseHighDpiPixmaps | <table align="center" border="0"><tr><td align="center" width="9999"> # DeepFaceLab <a href="https://arxiv.org/abs/2005.05535"> <img src="https://static.arxiv.org/static/browse/0.3.0/images/icons/favicon.ico" width=14></img> https://arxiv.org/abs/2005.05535</a> ### the leading software for creating deepfakes <img src="doc/DFL_welcome.png" align="center"> </td></tr> <tr><td align="center" width="9999"> <p align="center"> | 1,044 |
TeCSAR-UNCC/person-reid | ['person re identification'] | ['Real-time Person Re-identification at the Edge: A Mixed Precision Approach'] | tri_loss/dataset/TrainSet.py tri_loss/utils/distance.py script/experiment/train.py tri_loss/model/Model.py tri_loss/utils/dataset_utils.py script/experiment/visualize_rank_list.py tri_loss/model/shufflenetv2.py tri_loss/utils/visualization.py tri_loss/dataset/PreProcessImage.py tri_loss/model/resnet.py tri_loss/model/loss.py tri_loss/utils/metric.py tri_loss/model/effnet.py script/dataset/combine_trainval_sets.py script/dataset/transform_cuhk03.py tri_loss/dataset/Dataset.py tri_loss/utils/extract_weights.py script/dataset/mapping_im_names_duke.py script/dataset/mapping_im_names_market1501.py tri_loss/dataset/Prefetcher.py script/dataset/transform_market1501.py script/experiment/infer_images_example.py tri_loss/utils/re_ranking.py tri_loss/utils/utils.py tri_loss/dataset/__init__.py tri_loss/model/ShuffleNetV2.py tri_loss/model/MobileNetV2.py tri_loss/model/TripletLoss.py tri_loss/dataset/TestSet.py script/dataset/transform_duke.py combine_trainval_sets move_ims parse_original_im_name map_im_names save_im_name_mapping parse_original_im_name map_im_names save_im_name_mapping transform save_images transform parse_original_im_name save_images transform parse_original_im_name save_images Config main pre_process_im Config main ExtractFeature Config main ExtractFeature Dataset Enqueuer Prefetcher Counter PreProcessIm TestSet TrainSet create_dataset EffNet Flatten normalize euclidean_dist global_loss hard_example_mining conv_1x1_bn mobileNetFeature InvertedResidual conv_bn remove_fc MobileNetV2 Model resNetAvgpooling ResNet resnet50 Bottleneck resnet152 conv3x3 resnet50AvgPooling remove_fc resnet34 resnet18 BasicBlock resnet101 conv_1x1_bn ShuffleNetFeature InvertedResidual conv_bn channel_shuffle remove_fc ShuffleNetV2 shufflenetFeature shufflenetv2 SplitBlock test DownBlock ShuffleBlock ShuffleNetV2 BasicBlock TripletLoss get_im_names parse_im_name move_ims partition_train_val_set normalize compute_dist save_models mean_ap _unique_sample cmc re_ranking load_pickle measure_time tight_float_str may_set_mode save_pickle save_mat find_index adjust_lr_exp set_devices may_transfer_optims set_seed adjust_lr_staircase transfer_optim_state may_transfer_modules_optims load_state_dict is_iterable save_ckpt TransferModulesOptims get_model_wrapper time_str TransferVarTensor to_scalar str2bool may_make_dir set_devices_for_ml load_ckpt RunningAverageMeter AverageMeter ReDirectSTD print_array RecentAverageMeter add_border get_rank_list save_rank_list_to_im read_im make_im_grid save_im list defaultdict format sort copy set dict zip append ospj range len save_pickle load_pickle format print sort zip ospj range move_ims may_make_dir int basename defaultdict format append parse_im_name join format get_im_names sort print set dict isdisjoint zip map_im_names save_pickle len join dump format print File write zip chain range may_make_dir save_pickle join load_pickle list save_images partition_train_val_set format print sort set dict dirname abspath zip range exists may_make_dir len get_im_names sort cumsum dict enumerate dirname abspath append save_pickle move_ims len set array astype array im_mean resize Config TMO model sys_device_ids set_devices image_dir shape Model load_state_dict base append expanduser format asarray pre_process_im concatenate eval float enumerate load saved_feature_mat_file print get_im_names Variable convert TVT savemat numpy len 
staircase_decay_multiply_factor validate model_w zero_grad DataParallel exp_decay_at_epoch may_set_mode dataset base_lr cuda global_loss seed adjust_lr_exp initialize set_seed adjust_lr_staircase total_epochs stderr_file Adam log_to_file pprint save_ckpt range update val SummaryWriter __dict__ test mean resume avg stdout_file init to_scalar long net add_scalars time load_ckpt AverageMeter staircase_decay_at_epochs dict ReDirectSTD parameters only_test TripletLoss create_dataset next_batch step ckpt_file decode ExtractFeature list save_rank_list_to_im rank_list_size RandomState im_dir zip isinstance get_rank_list compute_dist argsort exp_dir ospj set_feat_func update load_pickle list format print ospeu set dict ospj sum TrainSet TestSet len expand_as t sqrt addmm_ expand data ne view size min squeeze expand t eq gather max normalize euclidean_dist hard_example_mining tri_loss list keys startswith load format print remove_fc load_state_dict isfile quit MobileNetV2 load_url ResNet remove_fc load_state_dict load_url ResNet remove_fc load_state_dict load_url ResNet remove_fc load_state_dict load_url ResNet remove_fc load_state_dict load_url ResNet remove_fc load_state_dict resNetAvgpooling print ResNet load_url remove_fc load_state_dict size view contiguous ShuffleNetFeature load format print remove_fc load_state_dict isfile ShuffleNetV2 quit randn print shape ShuffleNetV2 net int glob join array join basename parse_im_name seed int list remove arange setdiff1d sort hstack shuffle set flatten dict unique append array len norm sqrt T normalize matmul load initialize Adam base half dict parameters Model load_state_dict save export cuda state_dict zeros items choice defaultdict cumsum argsort shape _unique_sample zip append zeros range enumerate len format print average_precision_score argsort shape __version__ zeros range minimum exp zeros_like concatenate transpose astype float32 mean int32 unique append zeros sum max range len print abspath dirname may_make_dir dict savemat is_tensor isinstance Parameter items isinstance cpu cuda transfer_optim_state state Optimizer isinstance format Optimizer Module isinstance print state transfer_optim_state cpu cuda __name__ TransferModulesOptims TransferVarTensor find_index list remove TransferModulesOptims sort set TransferVarTensor append load format print load_state_dict zip dict dirname abspath save may_make_dir data items isinstance print set copy_ keys state_dict eval Module train isinstance makedirs seed format enabled print manual_seed print enumerate param_groups float rstrip print find_index rstrip param_groups print print time format ndarray isinstance copy dtype ndarray isinstance astype enumerate argsort append zip resize transpose asarray open ospdn transpose may_make_dir save add_border read_im zip append save_im make_im_grid len | # Person Re-identification (person-reid) A framewrok enabled mixed precision person re-identification. Currently it supports two ResNet-50 and MobileNet-V2. The pre-trained networks for both half-precision and single precision are avaialable in `expt_res` folder.  ## Prerequisites * PyTorch (V 1.0.0) * Apex: available [here](https://github.com/NVIDIA/apex) * Python 3 ## Installation ### Datasets and pre-requirements The core of this code is based on provided framework available [here](https://github.com/huanghoujing/person-reid-triplet-loss-baseline). Please follow the mentioned instruction about datasets and switches. | 1,045 |
Team-Neighborhood/awesome-face-detection | ['face detection', 'face alignment'] | ['Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks'] | S3FD/data/factory.py mtcnn/first_stage.py S3FD/layers/modules/multibox_loss.py mtcnn/detector.py mtcnn/box_utils.py fr-hog.py S3FD/layers/modules/l2norm.py _add_path.py S3FD/data/config.py fr-cnn.py S3FD/tools/detect.py mtcnn/visualization_utils.py ocv-haar.py S3FD/layers/functions/__init__.py yxl-s3fd.py S3FD/layers/functions/detection.py S3FD/layers/bbox_utils.py di-insight.py S3FD/layers/__init__.py S3FD/s3fd.py dan-mtcnn.py ocv-dnn.py mtcnn/get_nets.py S3FD/utils/augmentations.py dlib-cnn.py S3FD/layers/modules/__init__.py mtcnn/__init__.py S3FD/layers/functions/prior_box.py dlib-hog.py S3FD/__init__.py detect_image detect_image add_path calibrate_box nms correct_bboxes convert_to_square _preprocess get_image_boxes detect_faces _generate_bboxes run_first_stage RNet ONet Flatten PNet show_bboxes build_s3fd multibox S3FD add_extras vgg dataset_factory detection_collate decode nms intersect match_ssd log_sum_exp jaccard center_size match point_form encode Detect PriorBox L2Norm dyna_anchor int astype copy shape detect resize append range len data getTickCount COLOR_BGR2RGB getTickFrequency size sqrt to_chw_bgr unsqueeze Tensor numpy cuda cvtColor insert minimum concatenate maximum delete argsort append len maximum zeros_like expand_dims hstack correct_bboxes asarray _preprocess resize zeros range len expand_dims transpose calibrate_box nms ONet convert_to_square FloatTensor reshape min expand_dims eval numpy vstack append RNet get_image_boxes round PNet nms asarray FloatTensor _preprocess _generate_bboxes resize numpy net vstack array where rectangle circle range copy Conv2d enumerate enumerate vgg add_extras multibox WIDERDetection VAL_FILE TRAIN_FILE VOCDetection DIR append FloatTensor clamp size min expand max intersect expand_as squeeze_ lt sort size jaccard clone gt index_fill_ eq point_form encode sum max range squeeze_ size jaccard index_fill_ point_form encode max range log cat max mul sort new clamp index_select resize_as_ long | # Awesome face detection Compare face detectors - Dlib, OpenCV, Others.. <br> <br> <p align="center"> <img src='./result.png' width=70%> <br> We are neighborhood </p> --- | 1,046 |
Tencent/FaceDetection-DSFD | ['face detection', 'data augmentation'] | ['DSFD: Dual Shot Face Detector'] | layers/box_utils.py layers/__init__.py widerface_val.py data/__init__.py model/fpn.py layers/functions/detection.py layers/modules/l2norm.py face_ssd.py data/config.py utils/__init__.py layers/functions/__init__.py utils/augmentations.py model/resnet.py layers/modules/multibox_loss.py data/coco.py data/voc0712.py demo.py data/widerface.py layers/modules/__init__.py fddb_test.py model/detnet_backbone.py layers/functions/prior_box.py infer_flip write_to_txt vis_detections infer_multi_scale_sfd infer test_oneimage FEM multibox arm_multibox DeepHeadModule add_extras pa_multibox SSD vgg build_ssd test_fddbface flip_test bbox_vote write_to_txt multi_scale_test multi_scale_test_pyramid flip_test vis_detections detect_face test_widerface COCOAnnotationTransform get_label_map COCODetection VOCDetection VOCAnnotationTransform WIDERFaceAnnotationTransform WIDERFaceDetection detection_collate TestBaseTransform base_transform test_base_transform BaseTransform decode nms refine_match intersect log_sum_exp jaccard center_size match point_form encode sfd_match Detect PriorBox L2Norm MultiBoxLoss focalLoss DetNet detnet59 BottleneckB load_pretrained_imagenet_weights BottleneckA _FPN ResNet resnet50 Bottleneck resnet152 conv3x3 resnet resnet34 resnet18 BasicBlock resnet101 SwapChannels ToTensor RandomCrop ToAbsoluteCoords RandomBrightness PhotometricDistort RandomSaturation Resize RandomSampleCrop ToPercentCoords intersect SSDAugmentation Lambda Compose ConvertColor RandomBaiduCrop Expand SubtractMeans jaccard_numpy RandomHue ConvertFromInts RandomMirror RandomContrast ToCV2Image RandomLightingNoise format write range data Variable size range unsqueeze array permute resize append Tensor numpy cuda net infer zeros flip shape infer row_stack format subplots set_title print save_folder axis tight_layout add_patch imshow savefig Rectangle len load infer_flip bbox_vote vis_detections print TestBaseTransform visual_threshold infer IMREAD_COLOR eval trained_model img_root load_state_dict row_stack build_ssd imread cuda len MaxPool2d Conv2d enumerate enumerate enumerate enumerate print repr detect_face flip zeros shape sorted det_dir write cuda range flush makedirs data Variable size astype float32 range unsqueeze array permute resize append Tensor numpy cuda net column_stack detect_face row_stack detect_face range row_stack len minimum maximum delete row_stack tile zeros sum max floor ceil str flip_test save_folder row_stack cuda open multi_scale_test_pyramid pull_event TestBaseTransform len range format multi_scale_test detect_face bbox_vote pull_anno write_to_txt print pull_image makedirs append FloatTensor astype float32 astype float32 clamp size min expand max intersect expand_as decode squeeze_ size jaccard center_size index_fill_ encode max range squeeze_ size jaccard index_fill_ point_form encode max range squeeze_ byte sort size jaccard index_fill_ eq point_form encode sum max range log cat max mul sort new clamp index_select resize_as_ long Parameter data items isinstance copy_ state_dict DetNet load print load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict minimum clip maximum intersect | <img src="imgs/DSFD_logo.PNG" title="Logo" width="300" /> ## Update * 2019.04: Release pytorch-version DSFD inference code. * 2019.03: DSFD is accepted by CVPR2019. 
* 2018.10: Our DSFD ranks No.1 on [WIDER FACE](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/WiderFace_Results.html) and [FDDB](http://vis-www.cs.umass.edu/fddb/results.html) ## Introduction <p align='center'> <img src='./imgs/dsfd_video.gif' width=1000'/> </p> In this repo, we propose a novel face detection network, named DSFD, with superior performance over the state-of-the-art face detectors. You can use the code to evaluate our DSFD for face detection. | 1,047 |
Tenfleques/mot | ['multiple object tracking'] | ['Simple Online and Realtime Tracking'] | search_gnu_tap.py index_images.py tracker.py hashing.py utils/utils.py object_tracker.py search.py utils/datasets.py utils/parse_config.py openface/align_dlib.py models.py openface/torch_neural_net.lutorpy.py openface/helper.py person_tracker.py openface/data.py openface/__init__.py sort.py openface/torch_neural_net.py featurehash hammingDistance feature_distance dhash convert_hash euclidean main createVPTree YOLOLayer create_modules Darknet EmptyLayer detect_image processDetections trackPerson detect_image main searchSimilarImages KalmanBoxTracker iou Sort convert_bbox_to_z associate_detections_to_trackers convert_x_to_bbox parse_args PersonTracker AlignDlib iterImgs Image mkdirP TorchNeuralNet TorchNeuralNet ImageFolder ListDataset parse_data_config parse_model_config compute_ap build_targets bbox_iou_numpy to_categorical weights_init_normal load_classes bbox_iou non_max_suppression COLOR_BGR2GRAY print flatten resize cvtColor align print getLargestFaceBoundingBox waitKey imshow resize forward get list print VPTree write dhash dumps close open convert_hash list_images append imread keys enumerate createVPTree pop int YOLOLayer Sequential ZeroPad2d MaxPool2d add_module Conv2d ModuleList EmptyLayer Upsample append BatchNorm2d LeakyReLU sum enumerate unsqueeze_ Variable Compose min type float round get sorted list get_all_in_range VPTree dhash convert_hash append keys len int str putText FONT_HERSHEY_SIMPLEX copy shape rectangle trackPerson max time format sorted get_all_in_range print get format searchSimilarImages print waitKey imshow imread len minimum maximum float sqrt linear_assignment iou concatenate reshape append zeros empty enumerate add_argument ArgumentParser FloatTensor walk splitext makedirs join rstrip strip open startswith append split dict strip split open data normal_ __name__ constant_ concatenate size maximum sum range clamp min max minimum eps expand_dims maximum data sort new squeeze size shape unsqueeze cuda unique bbox_iou append max is_cuda cat enumerate int fill_ FloatTensor ones concatenate size range unsqueeze bbox_iou zeros argmax log | # PyTorch Object Detection and Tracking and Re-entry Object detection and tracking across video frames, re-entry supported References: 1. YOLOv3: https://pjreddie.com/darknet/yolo/ 2. Erik Lindernoren's YOLO implementation: https://github.com/eriklindernoren/PyTorch-YOLOv3 3. YOLO paper: https://pjreddie.com/media/files/papers/YOLOv3.pdf 4. SORT paper: https://arxiv.org/pdf/1602.00763.pdf 5. Alex Bewley's SORT implementation: https://github.com/abewley/sort | 1,048 |
TestSelection/TestSelection | ['autonomous driving'] | ['Guiding Deep Learning System Testing using Surprise Adequacy'] | exp_models/imagenet/__init__.py activeTraining.py exp_models/imagenet/imagenet_utils.py train_original.py utils/sa.py exp_models/mnist/letnet.py generateAdvImgs.py exp_models/mnist/deepxplore.py exp_models/mnist/mlp.py utils/tools.py exp_models/imagenet/inception_v3.py exp_models/imagenet/vgg19.py utils/activateLearning.py splitting_Training_dataset.py utils/ncoverage.py utils/load_data.py utils/convergence.py utils/attacker.py exp_models/cifar/VGG16_Clipped.py exp_models/cifar/Network_in_Network.py exp_models/imagenet/resnet50.py exp_models/imagenet/vgg16.py count_sampleNumOfEachClass selectData compute_shuffle_idx build_model_nonBn color_preprocessing scheduler_nonBn build_model_bn train scheduler_bn train_VGG16 VGG16_clipped _obtain_input_shape preprocess_input decode_predictions _preprocess_symbolic_input _preprocess_numpy_input InceptionV3 conv2d_bn preprocess_input identity_block ResNet50 conv_block VGG16 VGG19 correct_pad get_keras_submodule set_keras_submodules get_submodules_from_kwargs train Model3_deepXplore train scheduler build_model mlp train getSamplesByP getSamplesLSA getNeuroCover getSamplesByVar computeVariancescore getSamplesSilhoutte computeKLScore computeSilhouttescore computeDSAscore getSamplesDSA getSamples getSamplesRandom computeLSAscore getLabelHist selectData getKLDiverge attackFGSM attackDeepFool test_foobox attackJSMA attackCWl2 attackBIM konvrate konvord getData split_data initializeData spli_toy_data NCoverage fetch_dsa _get_train_target_ats warn find_closest_at Colors get_ats fetch_lsa infog find_min_at compute_roc get_sc fetch_sihoutete _get_kdes info _get_saved_path _get_lsa compute_roc_auc _aggr_output fail compute_avg var_prediction getKLandHistVarScore_Cifar10 getKLandHistVarScore_MNIST loadTest_cifar10 computeKL countGroup load_model getSADLScore_Cifar10 getGroups_cifar10 getLayerIndex prob_mean getADVimagesScore_Cifar10 boolean_string compute2DHistGroup getCifar_VoteLabel predict getMNIST_VoteLabel distcorr oneHot2Int setWeights getGroups_mnist getSurface_cifar generateNoiseAdversialExample getADVimgs_Cifar getADVimages_MNIST computeCor get_coverage getADVimageScore_MNIST get_Silhouettecoverage getSADLSocre_MNISTFashion var_mean pop list sum keys range values arange shuffle mkdir save getData format evaluate print reshape squeeze info getData astype range GlobalAveragePooling2D Lambda Sequential SGD add MaxPooling2D Conv2D Activation BatchNormalization compile Dropout GlobalAveragePooling2D Lambda Sequential SGD add MaxPooling2D Conv2D Activation compile Dropout print fit_generator summary flow CSVLogger getData ModelCheckpoint ImageDataGenerator LearningRateScheduler fit layers arange flow set_weights get_weights VGG16 fit_generator summary ImageDataGenerator LearningRateScheduler compile VGG16_clipped print fit dict CSVLogger getData ModelCheckpoint len Lambda Sequential add Dense MaxPooling2D Convolution2D BatchNormalization Flatten floatx astype get_submodules_from_kwargs dtype constant get_submodules_from_kwargs cast bias_add image_data_format ndarray isinstance get_submodules_from_kwargs get_file sort append get_submodules_from_kwargs str warn _obtain_input_shape get_file concatenate get_source_inputs get_submodules_from_kwargs Model conv2d_bn load_weights Input range str add str add _obtain_input_shape conv_block get_file get_source_inputs warn get_submodules_from_kwargs Model load_weights convert_all_kernels_in_model 
identity_block Input _obtain_input_shape get_file get_source_inputs get_submodules_from_kwargs Model load_weights convert_all_kernels_in_model Input _obtain_input_shape get_file get_source_inputs get_submodules_from_kwargs Model load_weights convert_all_kernels_in_model Input get keys isinstance Lambda Sequential add Dense MaxPooling2D Convolution2D Activation Flatten Dropout format shape Model3_deepXplore compile compile Lambda Sequential SGD add Dense MaxPooling2D Conv2D Flatten build_model Lambda Sequential add Dense Dropout mlp evaluate reshape mean var predict var asarray ones prob_mean squeeze extend divide mean compute2DHistGroup argmax selectData range predict len selectData arange shuffle len print shape fetch_dsa asarray computeDSAscore selectData fetch_dsa computeLSAscore selectData fetch_sihoutete argsort computeSilhouttescore selectData computeKL predict computeKL asarray ones prob_mean extend argsort compute2DHistGroup selectData range predict len argsort computeKL selectData predict batch_kmnc batch_diffScore concatenate shuffle batch_nc any selectData squeeze argsort argmax selectData predict FGSM set_learning_phase print extend tqdm KerasModel attack append range len set_learning_phase asarray extend tqdm KerasModel attack append range DeepFoolAttack len set_learning_phase CarliniWagnerL2Attack print extend tqdm KerasModel attack append range len set_learning_phase print extend tqdm KerasModel attack append SaliencyMapAttack range len set_learning_phase BIM print extend tqdm KerasModel attack append range len CarliniWagnerL2Attack set_learning_phase imagenet_example KerasModel ResNet50 attack print sum log len exp arange print polyfit log len to_categorical astype load_data arange shuffle initializeData isdir save getData makedirs getData initializeData save join zip print predict_classes map Model array save info append Pool predict norm get_ats print _get_train_target_ats tqdm find_closest_at info append enumerate print find_min_at _get_train_target_ats tqdm info append max compute_avg enumerate items compute_avg norm format print gaussian_kde transpose infog delete warn tqdm append range len delete _get_lsa print _get_train_target_ats _get_kdes tqdm info append enumerate digitize linspace auc roc_curve concatenate fit array compute_roc concatenate digitize linspace digitize linspace atleast_2d atleast_1d mean sqrt pdist float sum squareform kendalltau squeeze normal format arange print sqrt shape len var squeeze mean zeros argmax range mean var argmax mean var var arange var_mean len item enumerate layers load_model name print dict get_weights set_weights var entropy squeeze repeat zeros sum range len mlp VGG16_clipped VGG16 dropout build_model InceptionV3 weight_decay load_weights ResNet50 Model3_deepXplore build_model_bn VGG19 Input compile item item computeKL item load computeKL load get load get item item load astype array array item astype range len countGroup prob_mean linspace item var_mean load countGroup prob_mean linspace var_mean getSADLScore_Cifar10 prob_mean getKLandHistVarScore_Cifar10 astype linspace item range var_mean range astype linspace | # Test Data Selection based on Uncertainty Matrix - Why ? - Deep Learning is a kind of statistical learning. To test the the deep learning model, we need stand at an abstract point. 
Assuming the true data generation follows data generation `G`, you want to know how far you model can fit the true generation distribution ( `accuracy` ), and how stable your model can fit the true generation in the different scnerios (`uncertainty` ). We regard the model setting and trainning strategies as a `black box`. We use the uncertainty and covrage based on the outputs of model to test Deep Learning. The project studies these diffrent test metrics for this problem. We use them to socre data, select them, and show how to use the results to improve Deep Learning. Befotre testing, you should make sure that the output of your model currently have the similar distribution with the training dataset you use, e.g., good accuracy or low loss. Otherwise, it makes no much sense to test a bad performance model. - Implementation - python 3.6 | 1,049 |
ThayaFluss/cnl | ['stochastic optimization'] | ['Cauchy noise loss for stochastic optimization of random matrix models via free deterministic equivalents'] | src/env_logger.py temp/test_plots.py src/demo_rank_estimation.py src/cw.py src/vbmf/example.py src/matrix_util.py src/tests/test_demo_rank_estomation.py src/tests/test_sc.py src/tests/test_cw.py src/timer.py src/spn_c2.py src/tests/test_psvd.py src/demo_cauchy.py temp/sympy_util.py src/vbmf/vbmf.py src/random_matrices.py src/tests/test_cython.py src/vbmf/example_spn.py src/validate_train_cw.py src/cython_setup.py src/cauchy_demo.py src/validate_train_spn.py src/psvd.py src/train_fde.py src/vbmf/validate_vbmf_spn.py src/spn.py draw_loss_heatmap plot_diag_A CompoundWishart plot_true_sample_and_model_FDE singular_values generate_random_Toeplitz plot_evs nsubtrace random_Toeplitz compare_moments rectangular_diag get_moments_by_fourier ntrace L2_distance get_moments get_sum z_value_spn psvd_cnl rank_estimation signal_plus_noise random_from_diag haar_unitary signal_plus_noise_symm Ginibre SemiCircular Descrete Timer get_learning_rate train_cw train_fde_spn KL_divergence get_minibatch options _mean_and_std test_optimize test_scale_balance options test_sc _mean_and_std rank_recovery_baseline test_optimize TestCW TestCython TestDemoRankEstimation TestPSVD TestSC options options validate_vbmf_spn VBMF VBMF2 set_yticklabels diag linspace heatmap set_title ones set_xlabel identity savefig legend range format inf set_xticklabels copy info SemiCircular set_ylabel zeros array loss show loss SemiCircular format plot ones square_density savefig linspace figure info legend range array diag set_params format plot print rc SC ESD uniform hist savefig linspace figure legend density_subordinaiton makedirs zeros range complex transpose ntrace zeros range ERROR_DEBUG dot trace append array range print sum print transpose dot trace float list print transpose dot real H len eigh clf Timer list getcwd shape tic title savefig append format singular_values eig total_time toc print xlabel hist array makedirs toc format print size total_time tic Timer append range print transpose dot get_moments_by_fourier range get_moments zeros range randn svd T int format z_value_spn train_fde_spn info max svd T int format train_fde_spn info max rectangular_diag svd sum sqrt sqrt randn svd Ginibre rectangular_diag haar_unitary uniform range zeros range Ginibre conj dot Ginibre rvs sort exit shuffle choice flatten zeros array range exit range len sign clf linspace density_subordinaiton abs max exit savefig legend get_minibatch append sum regularization_grad_loss range format asarray plot grad_loss debug hstack close copy ESD grad_loss_subordination sqrt rectangular_diag update_params info trange int var SemiCircular set_params get_learning_rate norm print sort min makedirs dict hist figure allclose zeros array len toc SemiCircular format info loss_subordination total_time ESD tic Timer sign clf linspace abs max exit uniform title density ylim legend savefig get_minibatch append range format plot grad_loss debug close copy ESD sqrt info trange CompoundWishart int norm get_learning_rate sort rc makedirs dict hist figure forward_iter zeros array len format ArgumentParser add_argument mean sqrt train_cw clf setLevel addHandler uniform savefig legend range setFormatter format asarray plot close ESD mean removeHandler info CompoundWishart INFO FileHandler ext int sort rc Formatter figure array makedirs arange clf num_test round open use max_epoch ylabel title test_optimize ylim savefig 
legend minibatch append dim range format plot _mean_and_std close p_dim info ext xlabel rc write now figure makedirs svd signal_plus_noise minibatch debug sqrt train_fde_spn random_from_diag average arange validate_vbmf_spn clf num_test base_lr exists open use ones len max_epoch exit ylabel title test_optimize savefig legend append range dump format errorbar plot debug _mean_and_std close info trange empty base_scale ext int xlabel reshape rc write now figure makedirs clf num_test svd use ylabel title savefig legend append dim range format asarray errorbar size _mean_and_std close sqrt p_dim info ext xlabel rc random_from_diag figure get_rank_analytically VBMF2 signal_plus_noise random_from_diag | # CNL
### Cauchy Noise Loss
This is a tool for the optimization of two random matrix models: the compound Wishart model and the signal-plus-noise (information-plus-noise) model.
## Description
The compound Wishart model is
$W = Z^\top A Z$,
where $Z$ is a (real or complex) $p \times d$ Ginibre random matrix (i.e., its entries are i.i.d. and distributed as $N(0, \sqrt{1/d})$), and $A$ is a $p \times p$ deterministic self-adjoint matrix.
| 1,050 |
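To make the compound Wishart definition in the CNL readme above concrete, here is a minimal NumPy sketch of sampling $W = Z^\top A Z$; the matrix sizes and the diagonal choice of $A$ are illustrative assumptions, not taken from the repository.

```python
import numpy as np

def ginibre(p, d, rng=None):
    """Real Ginibre matrix: i.i.d. entries with standard deviation sqrt(1/d)."""
    rng = np.random.default_rng(rng)
    return rng.normal(0.0, np.sqrt(1.0 / d), size=(p, d))

def compound_wishart(a, d, rng=None):
    """Sample W = Z^T A Z for a p x p deterministic self-adjoint (here symmetric) A."""
    z = ginibre(a.shape[0], d, rng=rng)
    return z.T @ a @ z  # W is d x d

p, d = 8, 32
a = np.diag(np.linspace(0.5, 2.0, p))   # example choice of A, not from the repo
w = compound_wishart(a, d, rng=0)
print(w.shape, np.allclose(w, w.T))     # (32, 32) True
```

For the complex case, `z.T` would be replaced by the conjugate transpose `z.conj().T`.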
TheConfused1/Real-time-neural-style-transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | main.py | | # Real-time-neural-style-transfer The project is based on real-time conversion of images from a webcam to the artistic style of paintings using deep neural networks. The inspiration for artistic neural style transfer comes from the paper https://arxiv.org/abs/1508.06576 by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge, while the code derives heavily from the implementation of Adrian Rosebrock for the same. To run the code, execute the command: python main.py --models models To change the style: press N To take a snap: press C (the output window size and its location on the screen may vary over different machines. It is required to adjust these in case the output is not as expected.) To quit the window: press Q | 1,051 |
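The key-driven webcam loop described in the Real-time-neural-style-transfer readme above (load several style models, press N/C/Q to switch style, snapshot, or quit) can be sketched with OpenCV as below. This is an illustrative skeleton, not the repository's `main.py`; the `models/*.t7` layout and the mean values are assumptions borrowed from the common fast-style-transfer setup.

```python
import glob
import cv2

model_paths = sorted(glob.glob("models/*.t7"))  # assumed location of pre-trained style networks
nets = [cv2.dnn.readNetFromTorch(p) for p in model_paths]
idx, snap = 0, 0

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0, (w, h),
                                 (103.939, 116.779, 123.680), swapRB=False, crop=False)
    nets[idx].setInput(blob)
    out = nets[idx].forward()[0]                # (3, H, W), mean-subtracted BGR
    out[0] += 103.939; out[1] += 116.779; out[2] += 123.680
    styled = out.transpose(1, 2, 0).clip(0, 255).astype("uint8")
    cv2.imshow("styled", styled)

    key = cv2.waitKey(1) & 0xFF
    if key in (ord("n"), ord("N")):             # next style
        idx = (idx + 1) % len(nets)
    elif key in (ord("c"), ord("C")):           # take a snap
        cv2.imwrite(f"snap_{snap}.png", styled); snap += 1
    elif key in (ord("q"), ord("Q")):           # quit
        break

cap.release()
cv2.destroyAllWindows()
```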
TheHedgeify/DagstuhlGAN | ['snes games'] | ['Evolving Mario Levels in the Latent Space of a Deep Convolutional Generative Adversarial Network'] | marioaiDagstuhl/src/python/research/agents/forwardrandomagent.py marioaiDagstuhl/src/python/research/agents/marioagent.py marioaiDagstuhl/src/python/research/scripts/testnetagent.py marioaiDagstuhl/src/pytorch/game_gan.py marioaiDagstuhl/src/python/research/scripts/agentmeasure.py marioaiDagstuhl/src/python/research/client/marioenvironment.py marioaiDagstuhl/src/python/research/agents/__init__.py marioaiDagstuhl/src/python/research/client/__init__.py marioaiDagstuhl/src/python/research/utils/__init__.py marioaiDagstuhl/src/python/research/scripts/reproducejulian.py marioaiDagstuhl/src/python/competition/tasks/mariotask.py marioaiDagstuhl/src/python/competition/utils/__init__.py marioaiDagstuhl/src/python/competition/utils/cmdlineoptions.py pytorch/lhc_sample.py marioaiDagstuhl/src/python/competition/experiments/episodicexperiment.py marioaiDagstuhl/src/python/research/tasks/__init__.py marioaiDagstuhl/src/python/competition/utils/bitsTest.py pytorch/generator_ws.py marioaiDagstuhl/src/python/competition/agents/__init__.py pytorch/test_generator.py marioaiDagstuhl/src/python/research/agents/forwardagent.py marioaiDagstuhl/src/python/research/utils/cmdlineoptions.py marioaiDagstuhl/src/python/competition/ipymario.py marioaiDagstuhl/src/python/research/client/tcpenvironment.py marioaiDagstuhl/src/python/competition/agents/marioagent.py marioaiDagstuhl/src/python/competition/utils/dataadaptor.py marioaiDagstuhl/src/python/competition/tasks/episodictask.py pytorch/models/dcgan.py marioaiDagstuhl/src/python/competition/client/environment.py marioaiDagstuhl/src/python/competition/client/__init__.py marioaiDagstuhl/src/python/competition/tasks/task.py marioaiDagstuhl/src/python/research/agents/mdrnnagent.py marioaiDagstuhl/src/python/competition/tasks/__init__.py marioaiDagstuhl/src/python/competition/agents/forwardagent.py marioaiDagstuhl/src/python/research/scripts/evolvemrnnmario.py pytorch/wcs.py marioaiDagstuhl/src/python/competition/client/marioenvironment.py marioaiDagstuhl/src/python/research/utils/bitsTest.py marioaiDagstuhl/src/python/research/tasks/mariotask.py pytorch/models/mlp.py marioaiDagstuhl/src/python/research/utils/dataadaptor.py marioaiDagstuhl/src/pytorch/Comm.py marioaiDagstuhl/src/python/competition/experiments/experiment.py marioaiDagstuhl/src/python/competition/experiments/__init__.py marioaiDagstuhl/src/python/competition/client/tcpenvironment.py marioaiDagstuhl/src/python/research/agents/networkagent.py marioaiDagstuhl/src/python/competition/agents/forwardrandomagent.py marioaiDagstuhl/src/python/competition/client/client.py marioaiDagstuhl/src/python/research/scripts/ipymario.py pytorch/gan_optimize.py pytorch/main.py marioaiDagstuhl/src/python/research/client/client.py main ForwardAgent ForwardRandomAgent MarioAgent Client Environment MarioEnvironment TCPEnvironment EpisodicExperiment Experiment EpisodicTask MarioTask Task show CmdLineOptions show decode extractObservation ForwardAgent ForwardRandomAgent MarioAgent MarioMdrnnNetwork SimpleMdrnnAgent MdrnnAgent ModuleMarioAgent SimpleModuleAgent MLPMarioAgent SimpleMLPMarioAgent Client MarioEnvironment TCPEnvironment main combinedScore score writeAgentNet oneLevel main main main combinedScore MarioTask show CmdLineOptions decode extractObservation _netG _netD weights_init gan_fitness_function solid_blocks_fraction ground_blocks_fraction combine_images gan_maximse_title_type 
combine_images tiles2image weights_init combine_images DCGAN_D_nobn DCGAN_G_nobn DCGAN_G DCGAN_D MLP_D MLP_G ForwardAgent MarioTask doEpisodes name print EpisodicExperiment property print int range print empty range len decode int print append float empty range split MarioTask doEpisodes print EpisodicExperiment range sleep combinedScore range ForwardRandomAgent writeToFile str round module MarioTask doEpisodes name print reward reset sleep print writeAgentNet oneLevel writeToFile inputbuffer module SimpleMLPMarioAgent name SimpleMdrnnAgent normal_ __name__ fill_ int sqrt ceil zeros float enumerate generator argmax view Variable numpy array len generator Variable array view sum len | # MarioGAN This project allows for the unsupervised learning of a Generative Adversarial Network (GAN) that understands the structure of Super Mario Bros. levels. The model is trained on actual Mario levels from the [Video Game Level Corpus](https://github.com/TheVGLC/TheVGLC). The trained model is capable of generating new level segments with the input of a latent vector, and these segments can be stitched together to make complete levels. In order to find the best level segments within this latent space, the evolutionary algorithm Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is used to find latent vectors producing level segments that either optimize some sort of tile distribution or result in a particular level of performance by an artificial agent. The resulting system helps discover new levels in the space of examples created by human experts. | 1,052 |
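The MarioGAN readme above couples a trained generator with CMA-ES searching the latent space for segments that optimize a tile statistic. A rough sketch of that outer search loop, using the `cma` package, is shown below; `generate_level` is a stand-in for the trained DCGAN generator and the 25% solid-tile target is an arbitrary example — neither comes from the repository.

```python
import numpy as np
import cma

LATENT_DIM = 32

def generate_level(z):
    """Placeholder for the trained generator: latent vector -> 14x28 tile grid."""
    z = np.asarray(z)
    density = 1.0 / (1.0 + np.exp(-z.mean()))   # stand-in link from latent vector to tile statistics
    return (np.random.default_rng(0).random((14, 28)) < density).astype(int)  # 1 = solid tile

def fitness(z, target_solid=0.25):
    """CMA-ES minimizes, so return the distance of the solid-tile fraction from the target."""
    return abs(generate_level(z).mean() - target_solid)

es = cma.CMAEvolutionStrategy(LATENT_DIM * [0.0], 0.5, {"maxiter": 50})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [fitness(z) for z in candidates])

best_segment = generate_level(es.result.xbest)
print("best fitness:", es.result.fbest)
```

Replacing `fitness` with an agent-based performance score gives the second search mode mentioned in the readme.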
TheLethargicOwl/Single-Image-De-Raining-Keras | ['rain removal'] | ['Image De-raining Using a Conditional Generative Adversarial Network'] | ID-CGAN.py data_loader.py DataLoader | # Single-Image-De-Raining-Keras Implemented Image De-raining Using a Conditional Generative Adversarial Network using Keras [[Paper Link](https://arxiv.org/abs/1701.05957)] Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. In this project I implemented the Paper using Keras. It is kept in mind that the de-rained result should be indistinguishable from its corresponding clear image . @article{zhang2017image, title={Image De-raining Using a Conditional Generative Adversarial Network}, author={Zhang, He and Sindagi, Vishwanath and Patel, Vishal M}, journal={arXiv preprint arXiv:1701.05957}, | 1,053 |
Theys96/lottery-ticket-hypothesis | ['network pruning'] | ['The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks'] | code/LotteryTicket_lr.py code/LotteryTicket_baseline_lr.py train_model get_layer_density display_image read_mnist get_model_density setup_model train_model get_layer_density display_image read_mnist get_model_density setup_model title imshow squeeze flatten flatten Sequential compile TensorBoard EarlyStopping fit set_weights UpdatePruningStep | # Theys96/lottery-ticket-hypothesis Experimentation setup for the "Lottery Ticket" hypothesis for neural networks. \ Used for our report titled ["An Analysis of Neural Network Pruning in Relation to the Lottery Ticket Hypothesis"](report/report.pdf) for a course at the University of Groningen. --- The lottery ticket hypothesis refers to an idea relating to neural network pruning, as presented by Frankle and Carbin (J. Frankle and M. Carbin. The lottery ticket hypothesis: Training pruned neural networks. CoRR, abs/1803.03635, 2018. URL [http://arxiv.org/abs/1803.03635](http://arxiv.org/abs/1803.03635). Accessed: February 2020.). With the code in this repository, we experimented on this hypothesis with a LeNet model on the MNIST dataset. Consider the following figure.  We can interpret the figure above as this: When leveraging the "lottery ticket" principle after pruning the selected neural network model (LeNet) down to -say- 5% density, we achieve a higher test accuracy (compared to the original fully dense model or another pruning approach). The subject dataset is MNIST, by the way. Check out our complete [report](report/report.pdf) for more information. --- | 1,054 |
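As a sketch of the lottery-ticket procedure the report above analyzes — train, prune the smallest weights, rewind the survivors to their initial values, retrain — here is a generic Keras version; the architecture, epoch counts, and 5% density are illustrative and not the repository's exact configuration (which relies on the TensorFlow model-optimization pruning callbacks).

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

init_weights = [w.copy() for w in model.get_weights()]   # theta_0, kept for rewinding
model.fit(x_tr, y_tr, epochs=2, verbose=0)               # 1) train the dense network

density = 0.05                                           # 2) keep the largest 5% of each kernel
masks = []
for w in model.get_weights():
    if w.ndim > 1:
        thresh = np.quantile(np.abs(w), 1.0 - density)
        masks.append((np.abs(w) >= thresh).astype(w.dtype))
    else:
        masks.append(np.ones_like(w))                    # leave biases unpruned

class ApplyMasks(tf.keras.callbacks.Callback):
    """Re-zero pruned weights after every batch so the ticket stays sparse."""
    def on_train_batch_end(self, batch, logs=None):
        self.model.set_weights([w * m for w, m in zip(self.model.get_weights(), masks)])

model.set_weights([w0 * m for w0, m in zip(init_weights, masks)])     # 3) rewind to theta_0
model.fit(x_tr, y_tr, epochs=2, verbose=0, callbacks=[ApplyMasks()])  # 4) retrain the ticket
print(model.evaluate(x_te, y_te, verbose=0))
```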
ThierryBarros/reproduction-textrank | ['text summarization'] | ['WikiHow: A Large Scale Text Summarization Dataset'] | process.py | | # TextRank Reproduction Reproduction of the article TextRank: Bringing order to text. The code for preprocessing, applying the model, and evaluation is in the file **Summarizer.ipynb** # WikiHow-Dataset WikiHow: A Large Scale Text Summarization Dataset WikiHow is a new large-scale dataset using the online WikiHow (http://www.wikihow.com/) knowledge base <sup>[*](#footnote1)</sup>. The dataset is introduced in https://arxiv.org/abs/1810.09305. Please refer to the paper for more information regarding the dataset and its properties. Each article consists of multiple paragraphs and each paragraph starts with a sentence summarizing it. By merging the paragraphs to form the article and the paragraph outlines to form the summary, the resulting version of the dataset contains more than 200,000 long-sequence pairs. There are two separate data files containing the articles and their summaries: The wikihowAll.csv file consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries: |Part|Description| | 1,055 |
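A minimal pandas sketch of turning the `wikihowAll.csv` file described above into (article, summary) pairs; the column names `text` and `headline` used here are an assumption about the released CSV, not something stated in this readme.

```python
import pandas as pd

df = pd.read_csv("wikihowAll.csv")            # articles = merged paragraphs, summaries = bold lines
df = df.dropna(subset=["text", "headline"])   # assumed columns: 'title', 'headline' (summary), 'text' (article)

pairs = [
    (str(row["text"]).replace("\n", " ").strip(),
     str(row["headline"]).replace("\n", " ").strip())
    for _, row in df.iterrows()
]
print(len(pairs), "article/summary pairs")
print(pairs[0][1][:200])                      # first reference summary, truncated
```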
ThinhNgVhust/ocsvm | ['anomaly detection'] | ['Anomaly Detection using One-Class Neural Networks', 'Robust, Deep and Inductive Anomaly Detection'] | src/models/RCAE_Cifar10.py src/data/modules.py src/data/base.py src/data/main.py src/utils/misc.py src/utils/visualization/filters_plot.py models/LSTM_AE_OCSVM_models.py models/CAE_OCSVM_models.py models/synthetic_models.py src/data/cifar10.py models/sklearn_OCSVM_explicit_model.py src/utils/visualization/line_plot.py models/mnist_ae2.py models/models.py src/models/Fake_Noise_FF_NN.py models/keras_tl_oc_nn_cifar.py src/models/FF_NN.py models/tf_OneClass_NN_model_plot_scores.py models/plot_syn_scores.py models/sklearn_isolation_forest.py models/spam_models.py src/models/Deep_SVDD.py setup.py models/OCSVM_dogs_vs_cats.py models/plot_mnist_scores.py models/OCSVM_cifar.py models/sklearn_OCSVM_explicit_plot_scores.py models/tf_OneClass_NN_model.py models/DBN2_OCSVM_models.py models/OneClass_NN_model.py models/img_to_vec.py src/utils/visualization/five_number_plot.py src/models/svm.py models/AE_SVDD_models.py models/tflearn_OneClass_NN_model_plot_scores.py models/plot_scores.py models/test2.py models/plot_pfam_scores.py src/data/preprocessing.py src/models/Lenet.py src/models/RCAE_Verion1.py src/utils/visualization/mosaic_plot.py models/mnist_models.py models/cifar10vgg.py models/plot_cifar_scores.py src/data/iterator.py src/utils/pickle.py src/models/OC_NN.py models/dataset.py src/models/RCAE.py models/tf_OneClass_CNN_model.py src/models/Copy of OneClass_SVDD.py models/RDA_models.py src/models/ocnn.py models/sklearn_OCSVM_model.py src/models/OneClass_SVDD.py src/models/isoForest.py src/models/ocsvmSklearn.py src/utils/__init__.py src/data/Copy of mnist.py models/OneClass_NN_model_plot_scores.py src/utils/monitoring.py models/RCAE_models.py src/config.py src/data/__local__.py models/plot_spam_scores.py models/tf_Cifar_OC_NN_Models.py models/test.py src/debug/debug_mnist.py models/fake_news_models.py models/plot_fake_news_scores.py src/models/kernels/__init__.py src/models/kernels/weighted_degree.py src/models/SupervisedNN.py src/models/fakeNoiseNN.py models/OCSVM_Autoencoder_model.py models/plot_synthetic_scores_old.py models/pfam_models.py src/models/ocnnFakeNoise.py models/test_without_OC_nn.py src/data/GTSRB.py src/utils/visualization/diagnostics_plot.py models/plot_synthetic_scores.py models/tflearn_OneClass_NN_model.py src/debug/debug_ocnn.py models/plot_syn_scores_with_subplots.py models/cifar_models.py src/models/kernels/degree.py src/utils/diag.py models/usps_models.py models/sklearn_OCSVM_rpca.py src/models/config.py src/utils/log.py src/data/make_dataset.py src/data/mnist.py models/r_pca.py src/models/kde.py models/plot_usps_scores.py src/utils/visualization/scatter_plot.py src/utils/visualization/images_plot.py src/models/DCAE.py prepare_cifar_data_with_anamolies cifar10vgg Img2Vec plot_decision_scores_One_Class_NN_explicit plot_decision_scores_CIFAR_10 plot_decision_scores_CIFAR plot_decision_scores_FAKE_NEWS plot_decision_scores_pfam plot_decision_scores_USPS plot_decision_scores plot_decision_scores_SPAM plot_decision_scores plot_decision_scores_SYNTHETIC plot_decision_scores_USPS plot_decision_scores plot_decision_scores_SYN_new plot_decision_scores plot_decision_scores_Synthetic plot_decision_scores_SYN plot_decision_scores plot_decision_scores_USPS plot_decision_scores_USPS_new plot_decision_scores func_getDecision_Scores_sklearn_OCSVM_explicit ocsvm_obj sklearn_OCSVM_explicit_sigmoid relu sklearn_OCSVM_explicit_linear ocsvm_grad 
svmScore dRelu plot_decision_scores_sklearn_OCSVM_explicit plot_decision_scores_tflearn_OneClass_NN plot_decision_scores_tf_One_Class_NN Configuration DataLoader CIFAR_10_DataLoader debug_visualise_anamolies_detected load_mnist_images MNIST_DataLoader RcaeParamSaver load_mnist_labels readTrafficSigns_asnparray PIL2array GTSRB_DataLoader debug_visualise_anamolies_detected readTrafficSigns plot_cifar indices_generator iterate_batches load_dataset CreateDataSet load_mnist_images MNIST_DataLoader RcaeParamSaver load_mnist_labels addConvModule crop_to_square learn_dictionary extract_norm_and_out zca_whitening center_data rescale_to_unit_interval normalize_data pca gcn make_unit_norm global_contrast_normalization downscale Configuration Adjust_svdd_Radius OneClass_SVDD DCAE Deep_SVDD SupervisedFakeNoiseNN Fake_Noise_FF_NN FF_NN IsoForest KDE LeNet OCNN OCNNFakeNoiseNN OCSVM OC_NN Adjust_svdd_Radius OneClass_SVDD debug_visualise_anamolies_detected RCAE_AD debug_visualise_anamolies_detected RCAE_AD debug_visualise_anamolies_detected RCAE_AD SupervisedNN SVM degree_kernel weighted_degree_kernel NNetParamDiag NNetDataDiag log_isoForest AD_Log Log log_NeuralNet log_exp_config log_AD_results log_SVM log_KDE flush_last_line get_five_number_summary performance print_obj_and_acc ae_performance load_isoForest dump_svm load_svm dump_kde load_weights load_kde dump_isoForest dump_weights plot_random_reconstructions plot_parameter_norms plot_diagnostics plot_ae_diagnostics plot_accuracy plot_scores plot_objective_with_parts plot_representation_norms plot_auc plot_center_c_diagnostics plot_objectives plot_filters plot_five_number_summary plot_outliers_and_most_normal plot_line plot_mosaic plot_2Dscatter prepare_cifar_data_with_anamolies cifar10vgg Img2Vec plot_decision_scores_One_Class_NN_explicit plot_decision_scores_CIFAR_10 plot_decision_scores_CIFAR plot_decision_scores_FAKE_NEWS plot_decision_scores_pfam plot_decision_scores_USPS plot_decision_scores plot_decision_scores_SPAM plot_decision_scores plot_decision_scores_SYNTHETIC plot_decision_scores_USPS plot_decision_scores plot_decision_scores_SYN_new plot_decision_scores plot_decision_scores_Synthetic plot_decision_scores_SYN plot_decision_scores plot_decision_scores_USPS plot_decision_scores_USPS_new plot_decision_scores func_getDecision_Scores_sklearn_OCSVM_explicit ocsvm_obj sklearn_OCSVM_explicit_sigmoid relu sklearn_OCSVM_explicit_linear ocsvm_grad svmScore dRelu plot_decision_scores_sklearn_OCSVM_explicit plot_decision_scores_tflearn_OneClass_NN plot_decision_scores_tf_One_Class_NN Configuration DataLoader CIFAR_10_DataLoader debug_visualise_anamolies_detected load_mnist_images MNIST_DataLoader RcaeParamSaver load_mnist_labels readTrafficSigns_asnparray PIL2array GTSRB_DataLoader debug_visualise_anamolies_detected readTrafficSigns plot_cifar indices_generator iterate_batches load_dataset CreateDataSet load_mnist_images MNIST_DataLoader RcaeParamSaver load_mnist_labels addConvModule crop_to_square learn_dictionary extract_norm_and_out zca_whitening center_data rescale_to_unit_interval normalize_data pca gcn make_unit_norm global_contrast_normalization downscale Configuration Adjust_svdd_Radius OneClass_SVDD DCAE Deep_SVDD SupervisedFakeNoiseNN Fake_Noise_FF_NN FF_NN IsoForest KDE LeNet OCNN OCNNFakeNoiseNN OCSVM OC_NN Adjust_svdd_Radius OneClass_SVDD debug_visualise_anamolies_detected RCAE_AD debug_visualise_anamolies_detected RCAE_AD debug_visualise_anamolies_detected RCAE_AD SupervisedNN SVM degree_kernel weighted_degree_kernel NNetParamDiag 
NNetDataDiag log_isoForest AD_Log Log log_NeuralNet log_exp_config log_AD_results log_SVM log_KDE flush_last_line get_five_number_summary performance print_obj_and_acc ae_performance load_isoForest dump_svm load_svm dump_kde load_weights load_kde dump_isoForest dump_weights plot_random_reconstructions plot_parameter_norms plot_diagnostics plot_ae_diagnostics plot_accuracy plot_scores plot_objective_with_parts plot_representation_norms plot_auc plot_center_c_diagnostics plot_objectives plot_filters plot_five_number_summary plot_outliers_and_most_normal plot_line plot_mosaic plot_2Dscatter where concatenate add_subplot tight_layout title hist figure legend subplots set_title suptitle hist legend setp subplots set_title suptitle hist legend setp subplots set_title suptitle title hist legend setp subplots set_title suptitle hist legend setp subplots set_title suptitle title hist legend setp items title hist legend DataFrame subplots set_title suptitle title hist legend setp subplots set_title suptitle hist legend setp subplots arange set_title hist legend setp xticks subplots title hist savefig legend xticks range yticks subplots set_title suptitle hist legend setp subplots title hist savefig legend xticks range yticks ones mean sum svmScore relu append mean svmScore dRelu seed normal minimize print check_grad svmScore seed normal minimize print check_grad svmScore sklearn_OCSVM_explicit_sigmoid sklearn_OCSVM_explicit_linear add_subplot tight_layout title hist figure legend add_subplot tight_layout title hist figure legend add_subplot tight_layout title hist figure legend float32 floatX str list uint8 ndarray imsave print reshape astype shape range astype float32 int sum format reader rollaxis len close float32 resize append zeros imread next range open int format asarray reader ANTIALIAS close PIL2array resize append next open uint8 subplots divmod astype axis tight_layout imshow savefig range indices_generator len int arange min shuffle ceil check_all load_data StandardScaler leaky_relu addConvLayer addDropoutLayer addLeakyReLU GlorotUniform Constant addMaxPool addReLU addUpscale mean reshape reshape std max min array newaxis array svd T reshape mean shape dot sqrt prod diag newaxis print reshape PCA transform fit shape min fromarray rollaxis newaxis print reshape ones any zeros sum len print reshape components_ MiniBatchDictionaryLearning shuffle choice shape transform len StandardScaler StandardScaler cifar10_normal ones logical_and shape newaxis sum range ones zeros logical_and shape newaxis sum range zca_whitening out_frac mnist_normal pca gcn mnist_outlier open seed str mnist_rep_dim cifar10_normal format mnist_bias cifar10_rep_dim close gtsrb_rep_dim unit_norm_used mnist_architecture cifar10_architecture cifar10_outlier weight_dict_init write cifar10_bias leaky_relu batch_size R_update_solver c_mean_init_n_batches get_value ae_lr_drop block_coordinate lr_drop weight_decay ae_loss open lr_decay_after_epoch dropout_architecture lr_decay hard_margin R_update_lp_obj R_update_scalar_method reconstruction_penalty format pretrain lr_drop_in_epoch dropout close center_fixed ae_weight_decay k_update_epochs lr_drop_factor use_batch_norm warm_up_n_epochs ae_lr_drop_factor write c_mean_init ae_lr_drop_in_epoch svm_GridSearchCV format write close open format write close kde_GridSearchCV open format write close open format test_time write close train_time round open write range flush percentile median min max print title format save_objective_and_accuracy softmax_loss floatX batch_size forward n_val 
get_epoch min n_train track_best_results flatten print_obj_and_acc save_diagnostics n_test save_network_diagnostics svdd_loss empty get_epoch batch_size n_val min n_train ae_forward save_ae_diagnostics flatten n_test empty trainable_layers print get_value all_layers dict pickle_filename isbatchnorm trainable_layers set_value print all_layers pickle_filename svdd_loss isbatchnorm print print print print print print plot_random_reconstructions reconstruction_penalty plot_parameter_norms plot_accuracy plot_scores plot_objective_with_parts plot_representation_norms svdd_loss plot_auc plot_center_c_diagnostics plot_objectives plot_scores plot_objective_with_parts plot_auc plot_objectives OrderedDict plot_line OrderedDict title plot_line OrderedDict plot_line _y_test OrderedDict plot_line _y_train _y_val OrderedDict plot_line trainable_layers str isdense isconv OrderedDict plot_line num_units range _y_test OrderedDict plot_five_number_summary _y_train _y_val _y_test OrderedDict plot_five_number_summary _y_train _y_val str n_train choice plot_mosaic forward auc_best_epoch str plot_mosaic get_value set_palette arange grid clf max percentile show ylabel ylim title savefig legend plot xlim zeros yscale xlabel min set_style median fill_between len str int _X_test _X_val n_val n_train _X_train argsort auc_best_epoch title load_data n_test plot_mosaic yscale show plot xlabel min grid ylabel set ylim title savefig clf legend zeros max len show int ones squeeze shape imshow set_visible floor title xrange savefig gca newaxis clf moveaxis zeros show set title scatter savefig clf legend | # Keras-Tensorflow Implementation of One Class Neural Networks. This repository provides a Keras-Tensorflow implementation of the One Class Neural Network method presented in our paper ”Anomaly Detection using One Class Neural Networks”. # Citations and Contact. You find a PDF of the **One Class Neural Network paper** at: https://arxiv.org/pdf/1802.06360.pdf If you use our work, please also cite the paper: ``` @article{chalapathy2018anomaly, title={Anomaly Detection using One-Class Neural Networks}, author={Chalapathy, Raghavendra and Menon, Aditya Krishna and Chawla, Sanjay}, | 1,056 |
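The OC-NN objective behind the models listed in this row can be sketched as follows — a simplified TensorFlow 2 rendering of the one-class loss from the cited paper, in which the bias `r` is periodically reset to the ν-quantile of the network's scores. The network size, ν, and the random stand-in data are illustrative assumptions, not the repository's configuration.

```python
import numpy as np
import tensorflow as tf

nu = 0.1                                          # allowed fraction of boundary violations
x = np.random.randn(512, 32).astype("float32")    # stand-in for encoded "normal" training data

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="sigmoid", input_shape=(32,)),
    tf.keras.layers.Dense(1, use_bias=False),
])
opt = tf.keras.optimizers.Adam(1e-3)
r = tf.Variable(0.0, trainable=False)             # set to the nu-quantile, not learned by SGD

for step in range(200):
    with tf.GradientTape() as tape:
        scores = tf.squeeze(model(x), axis=1)
        w_reg = sum(tf.nn.l2_loss(v) for v in model.trainable_variables)
        hinge = tf.reduce_mean(tf.nn.relu(r - scores)) / nu
        loss = w_reg + hinge - r
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    r.assign(float(np.quantile(scores.numpy(), nu)))

anomaly = r.numpy() - tf.squeeze(model(x), axis=1).numpy()   # positive values flag likely anomalies
print("flagged fraction:", float((anomaly > 0).mean()))
```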
ThomasZiegler/Efficient-Smoothing-of-Dilated-Convolutions | ['semantic segmentation'] | ['Efficient Smoothing of Dilated Convolutions for Image Segmentation'] | utils/label_utils.py utils/__init__.py plot_training_curve.py network.py dataset_cityscapes/generate_dataset_txt.py utils/write_to_log.py utils/image_reader.py main.py dilated.py model.py _dilated_conv2d _get_sigma _combinational_layer_learned _smoothed_dilated_conv2d_GI _gaussian_dilated_conv2d_oneLearned _get_c_vector _gaussian_dilated_conv2d_allLearned _smoothed_dilated_conv2d_SSC _combinational_layer_fix _gaussian_dilated_conv2d_fix _averaged_dilated_conv2d _regular_dilated_conv2d _decomposed_dilated_conv2d main del_all_flags configure write_all_flags Model Deeplab_v2 ResNet_segmentation get_log plot_epoch plot_iteration main printError read_images_from_disk ImageReader image_scaling read_labeled_image_list random_crop_and_pad_image_and_labels image_mirroring inv_preprocess decode_labels prepare_label write_log print exit value Variable depthwise_conv2d_native constant value exp value arange constant Variable conv3d squeeze reduce_sum add div meshgrid zeros expand_dims _get_sigma value arange exp constant Variable conv3d squeeze reduce_sum add div meshgrid zeros expand_dims value depthwise_conv2d_native _get_sigma constant value arange exp Variable conv3d squeeze _get_c_vector reduce_sum add div meshgrid zeros expand_dims depthwise_conv2d_native constant value arange exp Variable conv3d squeeze _get_c_vector reduce_sum add div meshgrid zeros expand_dims assign softmax value space_to_batch batch_to_space value space_to_batch batch_to_space value DEFINE_boolean DEFINE_integer DEFINE_float _parse_flags flags DEFINE_string __delattr__ keys _parse_flags _parse_flags str remove configure print add_argument write_all_flags Model reset_default_graph write_log option ArgumentParser parse_args range Session readlines close append float open show plot xlabel get_log ylabel title range len show plot xlabel get_log ylabel mean title append array range len print format exit join printError glob sort makedirs to_float resize_images to_int32 squeeze multiply stack random_uniform resize_nearest_neighbor expand_dims less stack boolean_mask reverse pad_to_bounding_box random_crop concat maximum shape cast set_shape append split open image_scaling concat image_mirroring read_file cast random_crop_and_pad_image_and_labels decode_png decode_jpeg split load new shape zeros array range enumerate uint8 astype shape zeros range | # Efficient Smoothing of Dilated Convolutions for Image Segmentation This is the code for reproducing the experiments of our project [Efficient Smoothing of Dilated Convolutions for Image Segmentation](https://arxiv.org/abs/1903.07992). The code is based on the source code of the paper [Smoothed Dilated Convolutions for Improved Dense Prediction](http://www.kdd.org/kdd2018/accepted-papers/view/smoothed-dilated-convolutions-for-improved-dense-prediction), see original README below. ## Authors * Konstantin Donhauser [donhausk at ethz.ch] * Manuel Fritsche [manuelf at ethz.ch] * Lorenz Kuhn [kuhnl at ethz.ch] * Thomas Ziegler [zieglert at ethz.ch] ## Changes compare to source Repo * Addition of our proposed pre-filters: | 1,057 |
Thrandis/EKFAC-pytorch | ['stochastic optimization'] | ['Optimizing Neural Networks with Kronecker-factored Approximate Curvature'] | ekfac.py kfac.py grad_wrt_kernel EKFAC KFAC transpose | # EKFAC and K-FAC Preconditioners for Pytorch This repo contains a Pytorch (0.4.1) implementation of the EKFAC and K-FAC preconditioners. If you find this software useful, please check the references below and cite accordingly! ### Presentation We implemented K-FAC and EKFAC as `preconditioners`. Preconditioners are similar Pytorch's `optimizer` class, with the exception that they do not perform the update of the parameters, but only change the gradient of those parameters. They can thus be used in combination with your favorite optimizer (we used SGD in our experiments). Note that we only implemented them for `Linear` and `Conv2d` modules, so they will silently skip all the other modules of your network. ### Usage Here is a simple example showing how to add K-FAC or EKFAC to your code: ```python # 1. Instantiate the preconditioner preconditioner = EKFAC(network, 0.1, update_freq=100) # 2. During the training loop, simply call preconditioner.step() before optimizer.step(). | 1,058 |
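Fleshing out the EKFAC usage snippet quoted in the readme above into a complete loop: only the `EKFAC(network, 0.1, update_freq=100)` construction and the rule of calling `preconditioner.step()` before `optimizer.step()` come from that readme; the network, data, and optimizer settings below are placeholders.

```python
import torch
import torch.nn as nn
from ekfac import EKFAC                     # preconditioner class shipped in this repository

network = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(network.parameters(), lr=0.01, momentum=0.9)
preconditioner = EKFAC(network, 0.1, update_freq=100)   # arguments as in the README example
criterion = nn.CrossEntropyLoss()

for step in range(1000):
    inputs = torch.randn(64, 784)           # placeholder batch
    targets = torch.randint(0, 10, (64,))
    optimizer.zero_grad()
    loss = criterion(network(inputs), targets)
    loss.backward()
    preconditioner.step()                   # rescale the gradients stored in .grad
    optimizer.step()                        # then apply the preconditioned update
```

As the readme notes, only `Linear` and `Conv2d` modules are preconditioned; other layers fall back to the plain optimizer update.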
TianyuanYu/DistributionNet | ['person re identification'] | ['Robust Person Re-Identification by Modelling Feature Uncertainty'] | train_ReID_classifier.py ReID_preprocessing/preprocessing_factory.py eval_market_multi.py ReID_preprocessing/img_process_utils.py nets/resnet_v1.py dataset_utils.py dataset_factory.py model_deploy.py train_ReID_classifier_from_resnet50.py nets/resnet_utils.py utils.py train_ReID_classifier_con.py ReID_preprocessing/reid_preprocessing_fix.py nets/nets_factory.py _extract_feats eval_market main extract_features feat_aggregate _gather_clone_loss deploy _add_gradients_summaries _optimize_clone create_clones optimize_clones DeploymentConfig _sum_clones_gradients main main main get_network_fn Block fc_layer conv2d_same proj_2Din4_bn_relu subsample moving_average_cov_mat get_biases extra_fc stack_blocks_dense proj_feats_fn resnet_arg_scope net_decorr_reg weight_decorr_whiten_reg get_weights projecting_feats resnet_distributions_v1 linear resnet_v1_distributions_baseline_50 resnet_distributions_baseline_v1 resnet_v1_distributions_50 bottleneck resnet_v1_50 resnet_v1 _aspect_preserving_resize call_distort_image_inception call_distort_image_vgg process_images_vgg_for_eval fix_erase_images call_distort_image_inception_with_erase _upl_crop _upr_crop call_distort_image process_images_inception_for_eval _central_crop _resize _random_crop augment_images_vgg call_process_image_inception_for_eval _smallest_size_at_least _dr_crop _dl_crop rand_erase_images call_process_image_vgg_for_eval _crop _mean_image_subtraction augment_images_inception get_preprocessing preprocess_image _post_img_list preprocess_for_train preprocess_for_eval asarray batch_size mean vstack append range batch_size tuple Session run restore squeeze tolist len shape ceil append range feat_aggregate start_queue_runners latest_checkpoint config_eval_ckpt_path IsDirectory zip info float max_num_batches int print index global_variables_initializer split str format print mean eval_market append zeros extract_features invert asarray cumsum astype float32 argsort shape mean int32 append sum range set_verbosity INFO scope scalar _gather_clone_loss REGULARIZATION_LOSSES get_collection add_n _sum_clones_gradients len SUMMARIES get_collection set scope create_clones UPDATE_OPS append add_n zip global_norm isinstance name IndexedSlices histogram info append values config_and_print_log makedirs hd_data set_verbosity INFO train_dir hasattr default_image_size pad expand_dims reduce_mean squeeze fc_layer get_biases get_weights original_name_scope constant Variable moving_average_cov_mat multiply print add_to_collection constant abs subtract multiply matmul add reduce_sum div sqrt assign trace moments constant abs subtract multiply matmul add reduce_sum div sqrt reduce_mean trace add_to_collection eye moments convert_collection_to_dict stop_gradient sorted format constant proj_2Din4_bn_relu print multiply concat add reduce_mean findall expand_dims keys append squeeze get_variable greater_equal slice to_int32 logical_and with_dependencies Assert shape stack rank equal greater_equal reshape logical_and with_dependencies extend Assert shape rank random_uniform append range equal len append _crop append _crop append _crop append _crop append _crop range split convert_to_tensor to_float to_int32 greater cond convert_to_tensor resize_bilinear squeeze shape set_shape _smallest_size_at_least expand_dims squeeze resize_bilinear shape set_shape expand_dims map_fn to_float subtract random_flip_left_right map_fn to_float 
random_flip_left_right subtract multiply divide to_float random_flip_left_right subtract multiply divide rand_erase_images crop_size subtract mul minimum sqrt cast int32 random_uniform less cond slice concat random_uniform map_fn to_float subtract map_fn to_float subtract multiply divide to_float _mean_image_subtraction append resize_bilinear | # Robust Person Re-identification by Modelling Feature Uncertainty This repo contains the reference source code for the paper [Robust Person Re-identification by Modelling Feature Uncertainty](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yu_Robust_Person_Re-Identification_by_Modelling_Feature_Uncertainty_ICCV_2019_paper.pdf) in ICCV 2019. In this project, we provide Resnet-Baseline and DistributionNet architectures and results with 10% random noise on Market dataset. ## Enviroment - Python 2.7 - Tensorflow 1.3.0 ## Getting started Download dataset from [here](https://drive.google.com/drive/folders/1VUpNKRjaxOh3A_sbgsWdKuhq7BOHOOC9?usp=sharing) Folder 'Market' includes original Market dataset (training, query, and gallery) and 10% random noise Market training dataset file. Folder 'result' includes trained models of Resnet-Baseline and DistributionNet Folder 'pretrained_model' includes Resnet-50 pretrained in ImageNet | 1,059 |
TideDancer/DTWNet | ['dynamic time warping'] | ['DTWNet: a Dynamic Time Warping Network'] | code/main.py code/extension-ffi/package/test/test.py code/extension-ffi/script/functions/dynamic_prog.py code/Model.py code/configs/config_decomposition.py code/extension-ffi/package/my_package/functions/add.py code/configs/config_sample.py code/_ext/my_lib/__init__.py code/extension-ffi/script/functions/add.py code/extension-ffi/script/test_dp.py code/extension-ffi/script/_ext/dynamic_prog_lib/__init__.py code/extension-ffi/package/my_package/modules/add.py code/extension-ffi/script/test.py code/Util.py code/configs/config_sliding.py code/extension-ffi/script/_ext/my_lib/__init__.py code/extension-ffi/script/build.py code/extension-ffi/script/modules/add.py code/extension-ffi/script/build_dp.py code/configs/config_cnn.py code/extension-ffi/package/setup.py code/test_barycenter.py code/extension-ffi/script/modules/dynamic_prog.py code/extension-ffi/package/build.py code/_ext/dynamic_prog_lib/__init__.py DTWNET DTW_SPRING_ROW DTWNET_BOOSTING DTW_SPRING_EPS DTWNET_SINGLE DTW_BOOSTING DTW_FULL_DTWCO DTWLAYER_SHORTKERNEL DTWNET_MSE DTW_FULL MLP DTWLAYER_BOOSTING DTWLAYER DTW_SPRING_NPATH CONV_MLP DTWNET_BASE pythonDTW_FULL_ROW DTW pythonDTW_SPRING_ROW load_test load_train loss_function_mse loss_function_mse_flat loss_function_crossentropy test train train_hybrid_switch read_ucr_downsample loss_function_crossentropy test_hybrid_switch loss_function_hybrid load_boosting_mse loss_function_mse UCRDataset load_train loss_function_mse_flat test loss_function_crossentropy_shape loss_function_crossentropy_diverse auc loss_function_multi read_ucr proc_data load_test test_hybrid accuracy train_hybrid convert_Y loss_function train MyAddFunction MyAddModule MyNetwork MyNetwork MyNetwork read_ucr MyAddFunction DynamicProgFunction MyAddModule DynamicProgModule _import_symbols _import_symbols _import_symbols _import_symbols MSELoss zeros shape CrossEntropyLoss time format line model Variable backward dataset zero_grad loss_func print zeros float step enumerate len print eval get line get int list proc_data print read_ucr_downsample get int list proc_data read_ucr_downsample MSELoss range view MSELoss l dot kernel abs CrossEntropyLoss CrossEntropyLoss l loss_function_mse_flat loss_function_crossentropy loadtxt list asarray reshape loadtxt asarray reshape list index copy set range len LongTensor convert_Y DataLoader TensorDataset double type list view read_ucr_downsample DataLoader TensorDataset double LongTensor FloatTensor type time format line LongTensor model backward print train dataset zero_grad type float step loss_function_hybrid enumerate len print eval get line time format line LongTensor model loss_function_mse_flat backward train print zero_grad loss_function_crossentropy dataset type float step enumerate len print eval get line dir _wrap_function getattr append callable | # DTWNet DTWNet: a Dynamic Time Warping Network (accepted in NeurIPS 2019) | 1,060 |
TingAnChien/san-vqa-tensorflow | ['visual question answering'] | ['Stacked Attention Networks for Image Question Answering'] | data/vqa_preprocessing.py data/vqa_preprocessing_v2.py parall_co-atten/model_wqv.py s2i.py demo/demo_att.py model_VQA_w2v.py prepro_img.py demo/demo_att_w2v.py san_lstm_att.py parall_co-atten/attention.py prepro_w2v.py san_cnn_att.py prepro.py demo/demo.py get_data_test test right_align get_data Answer_Generator train prepro_question encode_mc_answer encode_question filter_question get_unqiue_img main apply_vocab_question tokenize build_vocab_question get_top_answers encode_answer extract_feat prepro_question encode_mc_answer encode_question filter_question encode_answer get_unqiue_img main apply_vocab_question tokenize build_vocab_question get_top_answers extract_feat get_data_test test right_align get_data Answer_Generator train get_data_test test right_align get_data Answer_Generator train download_vqa main download_vqa main extract_imfeat test Answer_Generator prepro_text extract_imfeat prepro_text test Answer_Generator attention extract_imfeat prepro_text vis_attention test Answer_Generator extract_feat attention get_data_test test right_align get_data Answer_Generator train zeros range shape list print multiply transpose divide right_align sqrt tile sum keys list print multiply transpose divide right_align sqrt tile sum keys trainable_variables get_data assign Saver save GPUOptions Session run apply_gradients initialize_all_variables range build_model compute_gradients join time print Variable random_integers AdamOptimizer Answer_Generator trainable_variables argmax GPUOptions Session open run list append initialize_all_variables range dump concatenate time get_data_test print build_generator Answer_Generator zeros len word_tokenize print write lower tokenize flush enumerate get join sorted print map append sum values get join sorted print map append range zeros min enumerate len zeros enumerate len get zeros enumerate len append print enumerate get zeros enumerate len load seed dump prepro_question print filter_question encode_question shuffle len File close apply_vocab_question encode_answer get_unqiue_img create_dataset get_top_answers build_vocab_question open join list format encode zip imresize transpose copy next Bar create_dataset append zeros imread forward finish range len array extract_feat str list eval keys global_variables_initializer len str keys download_vqa append range system word_tokenize print min lower zeros range len sum imresize multiply set_device transpose divide copy set_mode_gpu Net sqrt tile imread forward array TEST load join remove extract_imfeat format prepro_text isfile sleep reshape min repeat tile resize expand_dims max gaussian_filter imwrite astype uint8 float32 attention array extract_feat reshape min repeat tile resize expand_dims max gaussian_filter vis_attention InteractiveSession | # Tensorflow Implementation of Stacked Attention Networks for Image Question Answering  Provide tensorflow edition for [SAN](https://arxiv.org/pdf/1511.02274.pdf), stacked attention network for image question answering model. The LSTM and CNN based question models are provided, and they both using two attention layers. This code is modified from a tensorflow edition for deeper LSTM and normalized CNN VQA ([VQA-tensorflow](https://github.com/JamesChuanggg/VQA-tensorflow)). ### Requirements The code is written in Python and requires [Tensorflow](https://www.tensorflow.org)(>r1.0). 
The preprocessing code is in Python.<br/> (I also provide an old version (r0.10) of the tensorflow model in branch r0.10) ### Prepare Data (modified from [VQA-tensorflow](https://github.com/JamesChuanggg/VQA-tensorflow)) (Some texts are copied from the original readme.md) The first thing you need to do is to download the data and do some preprocessing. Head over to the `data/` folder and run
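The two stacked attention layers mentioned in the SAN readme above can be summarized with a small NumPy sketch following the formulation in the linked paper; the dimensions, initialization, and two-hop loop below are illustrative rather than this repository's exact code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 512, 256, 196            # feature dim, attention dim, number of image regions

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_hop(v_i, u, W_I, W_Q, w_p):
    """One attention layer: query u attends over region features v_i (m x d)."""
    h = np.tanh(v_i @ W_I + u @ W_Q)   # (m, k)
    p = softmax(h @ w_p)               # attention weights over the m regions
    v_tilde = p @ v_i                  # attended visual vector, shape (d,)
    return v_tilde + u                 # refined query for the next hop

v_i = rng.normal(size=(m, d))          # CNN region features
u = rng.normal(size=(d,))              # question embedding (from the LSTM or CNN encoder)

for _ in range(2):                     # two stacked attention layers
    W_I, W_Q = rng.normal(size=(d, k)) * 0.01, rng.normal(size=(d, k)) * 0.01
    w_p = rng.normal(size=(k,)) * 0.01
    u = attention_hop(v_i, u, W_I, W_Q, w_p)

print(u.shape)                         # final query fed to the answer classifier
```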
Tju-AI/two-stage-labeling-for-the-sentiment-orientations | ['sentiment analysis'] | ['$ρ$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis'] | two_level/evaluate_result.py two_level/embed_attention.py two_level/model.py data_prepare rate_result range len data_prepare len | p-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis 1. Description of the algorithm Real sentences or paragraphs usually contain mixed sentiment orientations. Our algorithm is based a two-stage labeling for the sentiment orientations. A two-level network is accordingly proposed to utilize the two labeled data in the two-stage labeling strategy. Lexical cues (e.g., polar words, POS, conjunction words) are also used in our two-level network with a new encoding strategy. Please refer to our paper at arXiv for details: p-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis.(https://arxiv.org/pdf/1803.07771.pdf) 2. Environment: python 3.5 or higher, keras (2.1.3) 3. The Baidu AI API is used for Chinese word segmentation. For the five types of lexicon words and conjunction words used in the algorithm, please contact the authors of the algorithms (Email: [email protected]). 4. Please run the embed_attention.py at first and then the model.py. | 1,062 |
TobeyYang/S2S_Temp | ['response generation', 'text generation'] | ['SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient', 'Learning Neural Templates for Text Generation', 'Low-Resource Response Generation with Template Prior'] | discriminator.py infc.py gan.py generate.py generator.py template_extraction.py textcnn.py datasets.py utils.py chsmm_without_src.py gen_from_src label_data save_model gen_from_srctbl HSMM test make_targs train test_batch MaskDictionary TemplateCorpus MatchingCorpus Dictionary Corpus Discriminator validation train_discriminator_matching train_generator_PG train_generator_MLE train_discriminator _beam_search Hypothesis sort_hyps beam_search Generator just_bwd bwd_from_fwd_obs_logprobs viterbi recover_bps _just_bwd just_fwd remap_eos_states just_state2phrases extract_from_tagged_data align_cntr group_by_template topk_phrases TextCNN logsumexp0 make_bwd_constr_idxs logsumexp1 beam_search2 logsumexp2 backtrace vlogsumexp make_fwd_constr_idxs batchwise_sample backtrace3 copy_ size range fill_ save data save_model arange add_ zero_grad save cuda no_constr clip L randperm ngen_types sum range format size clip_grad_norm net backward print t parameters make_targs step len valid print size t eval test_batch range len idx2word word2idx trans_logprobs topk join len_logprobs print ar gen_one unsqueeze verbose split gen_one_ar cpu float sum cuda append len gen_from_srctbl tagged_fi print eval ntemplates extract_from_tagged_data enumerate eval validation sum save_model backward print write pretrained_gen_path randperm batch_nll_loss item append train step range len beam_size list sample_one batchClassify backward beam_sample zero_grad shuffle batch_nll_loss batch_reward_loss N train step cuda range len join batchBCEloss validate valid save_model backward zero_grad write bsz gan_path batchwise_sample train step range len data validate valid save_model zero_grad pretrained_dis_path list range bsz batchBCEloss join backward print write MatchingCorpus tqdm train step len get_next_word_dist make_src_masks unsqueeze log topk start_emb expand encode append range size copy stack softmax zero_ zip contiguous extend sort_hyps len get_next_word_dist make_src_masks add_ unsqueeze topk view start_emb expand expand_as encode append range LongTensor size stack softmax zero_ log_ contiguous backtrace zeros len int size item append max range max sub_ fill_ size new min add_ squeeze expand copy_ transpose stack index_fill_ zero_ append long range len logsumexp0 arange fill_ view size new min transpose expand copy_ stack index_fill_ zero_ range logsumexp0 exp Variable log_softmax size min new expand logsumexp2 stack index_fill_ zero_ range logsumexp0 Variable size min new expand logsumexp2 stack index_fill_ zero_ range copy_ size range fill_ defaultdict tuple add set append max enumerate len items list defaultdict div_ Tensor sum enumerate list sorted just_state2phrases group_by_template keys list sorted zip items list float sum values exp view size unsqueeze expand_as sum max log exp view size unsqueeze expand_as sum max log exp expand_as sum max log max update min extend range len update min extend max range len append append add_ copy_ mlpinp K cuda is_cuda topk view fill_ L expand_as append range cat one_rnn LongTensor word2idx size genset zero_ idx2word log_ Variable backtrace collapse_word_probs inpmlp out_features cpu zeros narrow list sample_one size shuffle pad cuda append max range cat len | # S2S-Temp This repository contains the data and code for [Low-Resource Response 
Generation with Template Prior (EMNLP2019)](https://arxiv.org/abs/1909.11968). The overall architecture is shown in Figure 1.  ## Environments The code has been tested with: * python 3.6 * pytorch 0.4.1 * Ubuntu 16.04 ## Data and Preparation **Question Response Generation Task** The preprocessed data is in [question_data](https://github.com/TobeyYang/S2S_Temp/tree/master/question_data). [question_data/dia_data](https://github.com/TobeyYang/S2S_Temp/tree/master/question_data/dia_data) contains the pair data where the raw data is downloaded from [here](http://coai.cs.tsinghua.edu.cn/file/QGdata.zip). The large scale unpaired question data is under [question_data/zhihu_data](https://github.com/TobeyYang/S2S_Temp/tree/master/question_data/zhihu_data). | 1,063 |
TobeyYang/StyleDGPT | ['response generation'] | ['StyleDGPT: Stylized Response Generation with Pre-trained Language Models'] | style_dialogpt/utils.py style_discriminator/eval_style_dis.py download_resources.py style_dialogpt/data_loader.py style_dialogpt/metrics/rouge/rouge.py style_dialogpt/metrics/distinct.py style_dialogpt/metrics/bleu/bleu.py style_lm/train_style_lm.py models/optim.py style_dialogpt/losses.py style_dialogpt/evaluate.py models/modeling_discriminator.py style_dialogpt/train_gumbel_kl.py models/__init__.py style_dialogpt/metrics/bleu/bleu_scorer.py style_dialogpt/env.py style_discriminator/build_data.py style_dialogpt/metrics/rouge/__init__.py models/modeling_gpt2.py style_dialogpt/metrics/compute_metrics.py style_discriminator/train_style_dis.py download_resource download_file unpack main download ClassificationHead Discriminator GPT2LMHeadModel noamwd_decay warmup_constant Adamax Adam rsqrt_decay warmup_linear noam_decay exponential_decay warmup_cosine RedditMultiRefExample SequentialDataLoader BucketBatchSampler BucketDataLoader GPT2Dataset RedditExample evaluate generate_responses evaluate_from_json compute_kl cal_entropy compute_metrics distinct Bleu my_lcs Rouge filter clf_eval interact train_discriminator cached_collate_fn train_epoch get_cached_data_loader collate_fn Dataset predict evaluate_performance convert_pad set_seed evaluate LineByLineTextDataset train _sorted_checkpoints main _rotate_checkpoints load_and_cache_examples read print close write GzipFile open join remove print unpack mkdir download exists split print join download exists get format isinstance print download_resource download_file enumerate items list format print add_argument resource ArgumentParser output_dir parse_args download cpu size view generate items list sorted BucketBatchSampler zip reduce mean GPT2Dataset DataLoader eval compute_metrics info context_lens to len sum tiny softmax detach join list split sum range values len update min tolist eval_emb_metrics OrderedDict compute_score distinct zip cal_entropy enumerate update len split sum values ngrams max range len list print tqdm sum range len print zip append clf_eval split format print eval item input tensor cuda pad_sequences tensor list keys tensor list keys cat format backward print nll_loss zero_grad train_custom discriminator item dataset step enumerate len format print eval dataset len avg_representation print tolist encode tensor DataLoader tqdm enumerate get_cached_data_loader DataLoader save Adam to range predict state_dict format evaluate_performance int time random_split join print train_epoch parameters Dataset len seed manual_seed_all manual_seed join sorted format glob match output_dir append format save_total_limit rmtree _sorted_checkpoints info max len long gradient_accumulation_steps resize_token_embeddings model get_linear_schedule_with_warmup clip_grad_norm_ zero_grad DataLoader DataParallel DistributedDataParallel max_grad_norm output_dir device save max initialize list hasattr set_seed logging_steps load_state_dict master_params to logdir state_dict SummaryWriter format _rotate_checkpoints close mean save_pretrained num_train_epochs info fp16 trange per_gpu_train_batch_size max_steps enumerate load join int n_gpu items evaluate model_name_or_path AdamW backward add_scalar makedirs named_parameters tqdm parameters step train_batch_size len DataParallel output_dir device tensor max eval_batch_size exp per_gpu_eval_batch_size SequentialSampler format join n_gpu makedirs tqdm load_and_cache_examples 
enable_attach from_pretrained config_name should_continue block_size warning device do_train save eval_all_checkpoints setLevel basicConfig model_class set_seed set_device _sorted_checkpoints to WARN config_class update init_process_group tokenizer_name save_pretrained info fp16 wait_for_attach train load join n_gpu evaluate model_name_or_path min barrier max_len dict bool load_and_cache_examples local_rank makedirs | # StyleDGPT This repo contains the code of the paper: [STYLEDGPT: Stylized Response Generation with Pre-trained Language Models](https://arxiv.org/abs/2010.02569), Findings of EMNLP2020. ## Requirments This code is tested on Python 3.6, Pytorch 1.3.0, and transformers 2.5.1. ``` pip install -r requirments.txt ``` ## Resources ### 1. Data First, you need to prepare the data following the pipeline in this prior [work](https://github.com/golsun/StyleFusion). | 1,064 |
Tom-Achache/QAEs | ['denoising'] | ['Denoising quantum states with Quantum Autoencoders -- Theory and Applications'] | train.py State_preparation.py test.py Stack_QAE.py Noisy_QSS.py Noise_robustness.py Cool_plots.py Regular_QSS.py QNN.py get_statevectors plot_SC plot_SH noise_robustness plot_noisy_VS_denoised_QSS noisy_QSS_protocol theoretic_fail_proba p_diff noise_strength_VS_QSS QDC noisy_QSS_circuit QNN tensor_Paulis create_Pauli QSS_protocol QSS_circuit stacked_QAE_fidelity_range stacked_QAE_fidelity state_preparation theoretical_fid plot_fid_VS_noise_strength fidelity compute_robustness plot_fid get_backend append zeros subroutine_2 range QuantumCircuit W add_subplot axis title plot_state_city set_zlim figure savefig add_subplot axis savefig figure plot_state_hinton normal errorbar fidelity xlabel grid ylabel test set_K mean axhline savefig legend append std len y arange z choice range QuantumCircuit x to_instruction list rand h barrier measure cx sdg append subroutine_2 range QuantumCircuit W int result get_counts noisy_QSS_circuit iter next range append noisy_QSS_protocol p_diff plot xlabel grid ylabel set_visible savefig legend array eye create_Pauli rand barrier h measure cx sdg range QuantumCircuit int format print result range get_counts iter next QSS_circuit len list get_statevector set partial_trace reset state_fidelity get_backend append subroutine_2 range QuantumCircuit W grid get_statevector subroutine_2 W list ylabel savefig append range errorbar set partial_trace mean get_backend state_fidelity xlabel reset std QuantumCircuit y arange rand h choice z cx range QuantumCircuit x append state_fidelity get_statevector get_backend fidelity test mean append std stacked_QAE_fidelity compute_robustness errorbar plot xlabel grid ylabel savefig legend theoretical_fid plot xlabel grid ylabel ylim set_visible savefig legend xlim len | # QAEs Denoising quantum states with Quantum Autoencoders. See the article here https://arxiv.org/abs/2012.14714. | 1,065 |
Tom-Zheng/depth_single_image | ['monocular depth estimation'] | ['Depth Map Prediction from a Single Image using a Multi-Scale Deep Network'] | Test.py model_part.py train_operation.py convert_mat_to_img.py dataset.py Depth.py task.py model.py convert_nyu output_predict_single output_predict DataSet main train _add_loss_summaries inference inference_refine loss _variable_on_gpu conv2d _variable_with_weight_decay fc main test train _add_loss_summaries fromarray join uint8 print transpose File shuffle save zip append max enumerate fromarray uint8 transpose MakeDirs save zip max enumerate fromarray uint8 transpose MakeDirs save zip max enumerate global_variables_initializer DataSet Variable print inference_refine float32 placeholder inference csv_inputs reset_default_graph loss train MakeDirs conv2d reshape fc max_pool conv2d concat dropout max_pool subtract reshape multiply square reduce_sum pow reduce_mean add_to_collection name get_collection apply average ExponentialMovingAverage scalar_summary multiply add_to_collection _variable_on_gpu l2_loss truncated_normal_initializer get_variable resize_images global_variables_initializer Variable reshape inference_refine float32 placeholder read_file string cast inference reset_default_graph decode_jpeg test scalar int trainable_variables name _add_loss_summaries apply apply_gradients histogram ExponentialMovingAverage exponential_decay float scalar | # cnn_depth_tensorflow cnn_depth_tensorflow is an implementation of depth estimation using tensorflow. Presentation slides of this project: https://docs.google.com/presentation/d/1ytuDdWJUG9VTK7wCVm3BzbW1uuGI_sRgH0TCZc1PNe0/edit?usp=sharing ## Reference Original paper is "Depth Map Prediction from a Single Image using a Multi-Scale Deep Network". https://arxiv.org/abs/1406.2283 Modified from: https://github.com/MasazI/cnn_depth_tensorflow ## Requierments | 1,066 |
Tom556/OrthogonalTransformerProbing | ['word embeddings'] | ['Introducing Orthogonal Constraint in Structural Probes'] | src/reporting/reporter.py src/network.py src/legacy/derivation.py tests/test_math.py src/data_support/tfrecord_wrapper.py src/reporting/metrics.py src/save_tfrecord.py src/constants.py src/report.py src/legacy/coreference.py src/data_support/random.py src/data_support/positional.py src/data_support/shuffled.py src/data_support/conll_wrapper.py src/data_support/dependency.py src/probe.py src/data_support/lexical.py prim_mst Network ConllWrapper DependencyDepth DependencyDistance LexicalDepth LexicalDistance PositionalDepth PositionalDistance RandomDistance RandomDepth ShuffledDepth ShuffledDistance TFRecordWriter TFRecordReader TFRecordWrapper merge_dict CoreferenceDistance DerivationDistance DerivationDepth Derivation RootAcc Metric Spearman UAS UASReporter SelectedDimensionalityReporter Reporter CorrelationReporter DependencyDepthReporter test_loss_depth test_orthogonal_contraint network test_loss_distance range minKey MAX_WORDPIECES get items SimpleNamespace assert_none_equal Variable ones assert_positive assert_near ortho_reguralization fill assert_near constant _loss fill assert_near constant _loss | # Orthogonal Probing The repository contains code for [Introducing Orthogonal Constraint in Structural Probes](https://arxiv.org/abs/2012.15228) ## Data preparation Before performing probing, save language data and Transformer embeddings in TFRecord format. This is necessary clear out lengthy Transformer inference during probe training. The code allows converting the [ConLL-U](https://universaldependencies.org/format.html) language data into [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) files ready for probing. Firstly, save your train, development, and test sets in `resources/entrain.conllu`, `resources/endev.conllu`, and `resources/entest.conllu`, respectively. Secondly, remember to install required python dependencies (Python 3.8 recommended). ``` pip install -r requirements.txt | 1,067 |
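The orthogonal constraint that gives the OrthogonalTransformerProbing project its name amounts to penalizing the probe's transformation for drifting away from an orthogonal matrix. Below is a hedged TensorFlow sketch of such a regularizer added to a probe-style loss; the penalty weight, shapes, and the stand-in structural loss are illustrative, not the repository's exact formulation.

```python
import tensorflow as tf

def ortho_penalty(B):
    """|| B B^T - I ||_F^2, pushing the probe map B towards orthogonality."""
    d = tf.shape(B)[0]
    gram = tf.matmul(B, B, transpose_b=True)
    return tf.reduce_sum(tf.square(gram - tf.eye(d, dtype=B.dtype)))

B = tf.Variable(tf.random.normal((64, 768), stddev=0.05))  # probe map: embedding dim -> probe dim
embeddings = tf.random.normal((32, 768))                   # a batch of contextual embeddings
opt = tf.keras.optimizers.Adam(1e-3)

for _ in range(100):
    with tf.GradientTape() as tape:
        projected = tf.matmul(embeddings, B, transpose_b=True)   # (32, 64)
        structural_loss = tf.reduce_mean(tf.square(projected))   # stand-in for the distance/depth loss
        loss = structural_loss + 0.05 * ortho_penalty(B)
    opt.apply_gradients([(tape.gradient(loss, B), B)])

print(float(ortho_penalty(B)))
```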
Tony607/ALOCC_Keras | ['outlier detection', 'one class classifier', 'anomaly detection'] | ['Adversarially Learned One-Class Classifier for Novelty Detection'] | utils.py ops.py kh_tools.py models.py read_lst_images kh_extractPatches get_image_patches kh_crop get_noisy_data kh_getSliceImages get_patch_video kh_getImages kh_isDirExist read_lst_images_w_noise2 read_image_w_noise kh_extractPatchesOne read_dataset_image_path kh_getSliceImages_simple read_lst_images_w_noise read_dataset_images read_image ALOCC_Model deconv2d get_image make_gif to_json transform_Slicization save_images transform visualize montage center_crop merge get_image_SlicizationWithShape merge_images kh_make_patches conv_out_size_same inverse_transform imread imsave get_image_Slicization append random_noise append join glob random_noise read_image append read_image_w_noise extend get_image_patches read_image_w_noise append extend get_image_patches read_image append join read_image glob read_lst_images get_image_patches format print extend range len append shape array print makedirs basename std kh_crop print min add mean kh_isDirExist dirname resize append zeros range array imsave open basename kh_crop print min kh_isDirExist dirname resize append zeros range array imsave open kh_extractPatchesOne append kh_extractPatches imread transform_Slicization astype float32 transform_Slicization astype float32 zeros enumerate squeeze merge int round center_crop imresize VideoClip write_gif make_gif int arange save_images batch_size print sampler strftime choice uniform gmtime run xrange ceil zeros tile append enumerate list as_strided Number isinstance tuple ndim strides shape array int isinstance imsave ones sqrt ceil array range | # [How to do Novelty Detection in Keras with Generative Adversarial Network](https://www.dlology.com/blog/how-to-do-novelty-detection-in-keras-with-generative-adversarial-network/) Keras implementation of paper https://arxiv.org/abs/1802.09088 Adversarially Learned One-Class Classifier or ALOCC for short. * [Tutorial part 1](https://www.dlology.com/blog/how-to-do-novelty-detection-in-keras-with-generative-adversarial-network/) * [Tutorial part 2](https://www.dlology.com/blog/how-to-do-novelty-detection-in-keras-with-generative-adversarial-network-part-2/) ## How to Run Require [Python 3.5+](https://www.python.org/ftp/python/3.6.4/python-3.6.4.exe) and [Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html) installed ### Clone or download this repo ``` git clone https://github.com/Tony607/Keras_TimeseriesGenerator | 1,068 |
TonyTangYu/ADMMiRNN | ['text classification'] | ['P-ADMMiRNN: Training RNN with Stable Convergence via An Efficient and Paralleled ADMM Approach'] | min_char_rnn.py rnn_text.py common.py lagrange phi_v phi_o der_tanh update_lambda2 phi_c update_b update_w phi_a cross_entropy update_v relu phi_u phi update_s update_o update_a softmax update_u update_lambda1 cross_entropy_with_softmax tanh phi_s update_lambda3 update_c phi_w phi_b sample lossFun sample lossFun inference softmax cross_entropy tanh sum matmul T zeros_like dot zeros range len T zeros_like dot zeros range len zeros_like dot zeros range len tanh dot zeros range len tanh T copy dot zeros range len T zeros_like dot range len dot range zeros_like len dot range len phi_u phi_w phi_b phi_a range len phi_s range len phi_v phi_c phi_o range len dot zeros len tanh len dot len tanh T exp clip zeros_like copy reversed dot zeros sum range len tanh exp choice dot append zeros sum range tanh exp copy dot zeros sum range len | # ADMMiRNN ADMMiRNN: Training RNN with Stable Convergence via An Efficient ADMM Approach | 1,069 |
TovlyDeutsch/Linguistic-Features-for-Readability | ['text classification'] | ['Linguistic Features for Readability Assessment'] | features/parser_uncertainty.py han/src/dataset.py features/flesch.py svm.py WeeBit.py magpie/config.py magpie/main.py magpie/base/word2vec.py magpie/base/document.py magpie/utils.py features/feature.py runner.py keras_model_generation.py bash_scripts/csv_stitcher.py Newsela.py han/src/hierarchical_att_model.py han/src/sent_att_model.py features/neural.py features/word_types.py han/src/utils.py common.py magpie/nn/input_data.py features/dummy.py bash_scripts/csv_gen.py bash_scripts/exp_runner.py features/NBTfidf.py bert/run_classifier.py han/src/word_att_model.py han/train.py features/pos_div.py dataSplitting.py features/java_importer.py magpie/nn/models.py bert/extract_weebit.py corpus_tsv.py generisize_func mkdir_run paramsToString getCorpusJson groups_to_list mkdir getCorpusJsonByName feature_fill pluck addToFileDict train_test_split feature_fill_non_tuples fileDictToTupleList writeExperiment genMagpie genAndTrainMagpie getGroups aquire_locked_file Config safe_load get_feature_class Runner gen_runner parse_config safe_dump main gen_config log train getGroups get_BitGCSE get_WRLevel get_BitKS3 create_avg_csv lookupShortName avg_metric get_pickle_groups std_metric run_exp get_BitGCSE get_WRLevel get_BitKS3 OneStopEnglishProcessor InputFeatures accuracy UcbenikiProcessor InputExample _truncate_seq_pair WeebitProcessor convert_examples_to_features NewselaProcessor warmup_linear main DataProcessor Dummy FeatureFullExample Feature wordCount get_syllable_count text_statistics flesch_formula get_word_count Flesch sent_count syllable_count get_sent_count WordCount SyllableCount FleschKincaid syllablesPerWord fk_formula not_punctuation avgSenLength numsyllables SentenceCount numsyllables_pronlist JavaImporter NBTfidfVectorizer Neural WordTypeProportions WordTypes main train get_args MyDataset generate_vocabulary HierAttNet SentAttNet LayerNorm element_wise_mul get_evaluation get_max_lengths matrix_mul WordAttNet set_tf_growth Magpie calculate_label_distribution get_all_answers save_to_disk load_from_disk calculate_number_of_labels_distribution get_answers_for_doc Document train_word2vec train_word2vec_in_memory compute_word2vec_for_phrase fit_scaler build_x_and_y get_data_for_model lstm bilstm root_mean_squared_error cnn get_nn_model rnn open open join makedirs range makedirs list print concat reduce values items append items append update isinstance copy set append keys range len update isinstance print copy append range len feature_fill print tolist min log floor addToFileDict sample DataFrame fileDictToTupleList len print save_scaler mkdir Magpie fit_scaler save_model TensorBoard strftime genMagpie load_word2vec_format train join read defaultdict reader print append enumerate open LOCK_EX flock LOCK_NB open aquire_locked_file load close LOCK_UN flock aquire_locked_file LOCK_EX dump isfile close LOCK_UN flock open eval print parse_config run_cross_eval Runner run load read verbose parse_args add_argument ArgumentParser SVC LabelEncoder vstack extractFeatureNames coef_ LogisticRegression shape classification_report mean sqrt f1_score GridSearchCV print NBTfidfVectorizer LinearSVC transform mean_squared_error MinMaxScaler array fit join str punctuation print set walk join str punctuation print set walk join str punctuation print set walk get_BitGCSE get_WRLevel get_BitKS3 load join defaultdict endswith append walk open append print append print writer load list read 
items print writerow filter avg_metric isfile append sum range open load join read endswith print system walk join text_b print InputFeatures convert_tokens_to_ids _truncate_seq_pair tokenize guid info append label text_a enumerate len pop len argmax BertAdam ArgumentParser save seed device_count convert_examples_to_features parse_args manual_seed_all param_groups recall_score get_world_size mean info manual_seed f1_score enumerate join learning_rate makedirs confusion_matrix empty_cache step zero_grad DataParallel DataLoader output_dir do_train eval_batch_size list DDP max_seq_length DistributedSampler tolist get_labels FusedAdam precision_score append to KFold init_process_group sqrt lower eval trange add_argument accuracy tqdm get_dev_examples mean_squared_error train bool gradient_accumulation_steps from_pretrained get_train_examples model tuple FP16_Optimizer set_device half shape warmup_linear SequentialSampler state_dict fp16 load print extend named_parameters numpy local_rank read_csv train_batch_size split device tensor accuracy_score TensorDataset format num_train_epochs int warmup_proportion bert_model backward RandomSampler len word_tokenize get_word_count map sum get_sent_count text_statistics text_statistics text_statistics text_statistics text_statistics parse_args add_argument ArgumentParser vocab_path get_evaluation batch_size model clip_grad_norm_ zero_grad DataLoader numpy save sep argmax cuda exists accuracy_score open num_classes train_set HierAttNet len Adam get_max_lengths append saved_path sum CrossEntropyLoss cat KFold word_hidden_size range format _init_hidden_state num_epoches sent_hidden_size eval manual_seed is_available vars enumerate load int remove criterion backward MyDataset write extend __len__ confusion_matrix parameters cpu step read_csv split get_args join items defaultdict print makedirs dirname append len str recall_score confusion_matrix precision_score log_loss f1_score accuracy_score argmax Parameter isinstance squeeze unsqueeze repeat append mm expand_as unsqueeze zip append cat word_tokenize iterrows sorted print sent_tokenize lower append read_csv len dump dirname open dict get_answers_for_doc join append items defaultdict Counter values reduce map Word2Vec init_sims zeros vector_size split load format isinstance print save_to_disk partial_fit next iter append get_all_words StandardScaler array range SentenceIterator init_sims Word2Vec dict build_x_and_y get reshape Document zeros float enumerate print Model Input compile print Model Input compile clear_session print Model append Input compile print Model Input compile | TovlyDeutsch/Linguistic-Features-for-Readability | 1,070 |
TransEmbedBA/TREMBA | ['adversarial attack'] | ['Black-Box Adversarial Attack with Transferable Model-based Embedding'] | Normalize.py imagenet_model/Resnet.py train_generator.py DataLoader.py attack.py FCN.py utils.py EmbedBA imagenet Imagenet_Decoder Imagenet_Encoder Permute Normalize train test MarginLoss_Single Function MarginLoss conv1x1 ResNet Bottleneck conv3x3 resnet101_denoise resnet152_denoise DenoiseBottleneck resnet152 BasicBlock Denoise function zeros_like device max view squeeze transpose normal_ expand_as append range format mean item float empty norm decoder print clamp clone repeat bool len ImageFolder DataLoader Compose array format zip model backward clamp print zero_grad hingeloss dict item append to step net enumerate model clamp hingeloss eval to range enumerate len ResNet ResNet ResNet | # Black-Box Adversarial Attack with Transferable Model-based Embedding This repository contains the code for reproducing the experimental results of attacking Imagenet dataset, of our submission: *Black-Box Adversarial Attack with Transferable Model-based Embedding*. https://openreview.net/forum?id=SJxhNTNYwB ## Requirements Python packages: numpy, pytorch, torchvision. The code is tested under Ubuntu 18.04, Python 3.7.1, PyTorch 1.1.0, NumPy 1.16.4, torchvision 0.3.0, CUDA 10.0 and cuDNN 7.4.2. Please download the weight of the generator from https://drive.google.com/file/d/1IvqcYTnIjqPK7oZU-UnVzjfdxdtV63jk/view?usp=sharing and extract it in the root folder; Please download the test images from https://drive.google.com/file/d/1Gs_Rw-BDwuEn5FcWigYP5ZM9StCufZdP/view?usp=sharing and extract it under [dataset/Imagenet](dataset/Imagenet) ## Reproducing the results ### Imagenet targeted attack: | 1,071 |
Triple-L/Wartegg_test | ['gaussian processes'] | ['Neural Tangent Kernel: Convergence and Generalization in Neural Networks'] | Eve/hoffman_line.py Eve/black_border.py Eve/0.5_thinning.py Eve/1_line_works.py Eve/main_lines.py Process steps.py Test/T_Hough_for_lines.py Test/T_python_fitline_最小二乘.py Eve/0_Blck_whit_fat.py Eve/jimmy_ROI.py Test/test_code.py stairght_line/straight_line.py stairght_line/T_绘制轮廓.py Eve/2_grid.py Z-S thinning.py Eve/line_fit.py main.py tools_utils.py Eve/main.py stairght_line/T_泛洪填充.py Test/T_OpenCV直线拟合检测.py my_utils.py Hough_curves.py load_img check_file Two Thin canny_demo cut_image save_images change_size canny_demo LineDetector fill_binary LineDetector Least_squares print exists append imread listdir shape range shape range Canny imwrite int print size append range str save print min shape append imread max range uint8 ones floodFill imshow FLOODFILL_MASK_ONLY zeros mean zeros arange square | # Wartegg_test For wartegg test 1# start with survey the research status of Wartegg test. **Reading Material gathered:** A Neural Representation of Sketch Drawings https://arxiv.org/pdf/1704.03477.pdf Google Blog Google AI Teaching Machines to Draw https://ai.googleblog.com/2017/04/teaching-machines-to-draw.html Sketch-based Object Recognition | 1,072 |
TrizteX/MIC_proj1 | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | inf.py neural_style.py | TrizteX/MIC_proj1 | 1,073 |
TropComplique/FaceBoxes-tensorflow | ['face detection'] | ['FaceBoxes: A CPU Real-time Face Detector with High Accuracy'] | src/input_pipeline/pipeline.py create_pb.py src/network.py src/losses_and_ohem.py src/input_pipeline/random_image_crop.py train.py src/__init__.py src/input_pipeline/__init__.py src/utils/__init__.py src/training_target_creation.py src/anchor_generator.py face_detector.py save.py src/utils/nms.py src/constants.py evaluation_utils.py src/input_pipeline/other_augmentations.py src/detector.py src/utils/box_utils.py model.py create_tfrecords.py main make_args dict_to_tf_example _bytes_feature make_args main _float_list_feature compute_ap compute_best_threshold Box Evaluator evaluate_detector compute_iou match FaceDetector model_fn add_weight_decay serving_input_receiver_fn get_input_fn AnchorGenerator tile_anchors generate_anchors_at_upper_left_corner Detector classification_loss apply_hard_mining localization_loss _subsample_selection_to_desired_neg_pos_ratio preprocess FeatureExtractor inception_module _create_targets get_training_targets _match random_jitter_boxes random_pixel_value_scale random_flip_left_right random_color_manipulations Pipeline random_image_crop _prune_completely_outside_window _ioa _prune_non_overlapping_boxes _random_crop_image _change_coordinate_frame decode iou batch_decode to_minmax_coordinates area to_center_coordinates intersection encode batch_non_max_suppression add_argument ArgumentParser Graph ConfigProto join int BytesIO print Example append float open dict_to_tf_example annotations_dir make_args open image_dir SerializeToString ceil num_shards TFRecordWriter close mkdir listdir load join print write output tqdm rmtree len compute_ap compute_best_threshold sort match float max enumerate values len abs argmax asarray ymin min ymax xmax xmin max compute_iou enumerate zip AnchorGenerator get_total_loss Detector get_collection PredictOutput FeatureExtractor UPDATE_OPS histogram add_loss get_predictions loss scalar trainable_variables constant REGULARIZATION_LOSSES multiply float32 add_to_collection l2_loss placeholder listdir to_float reshape sqrt stack pad generate_anchors_at_upper_left_corner tile meshgrid expand_dims range to_float reshape stack tile expand_dims range less abs sparse_softmax_cross_entropy_with_logits as_list len reduce_sum _subsample_selection_to_desired_neg_pos_ratio reduce_mean stack unstack set_shape append gather scalar enumerate non_max_suppression to_float greater_equal cumsum to_int32 squeeze less_equal size maximum reduce_sum logical_not where logical_or gather conv2d max_pool2d iou one_hot to_int32 multiply reduce_max greater where add argmax greater_equal zeros squeeze to_int32 where dynamic_stitch encode gather equal less cond random_uniform decode reshape clip_by_value tile expand_dims map_fn | # FaceBoxes-tensorflow This is an implementation of [FaceBoxes: A CPU Real-time Face Detector with High Accuracy](https://arxiv.org/abs/1708.05234). I provide full training code, data preparation scripts, and a pretrained model. The detector has speed **~7 ms/image** (image size is 1024x1024, video card is NVIDIA GeForce GTX 1080). ## How to use the pretrained model To use the pretrained face detector you will need to download `face_detector.py` and a frozen inference graph (`.pb` file, it is [here](https://drive.google.com/drive/folders/1DYdxvMXm6n6BsOy4dOTbN9h43F0CoUoK?usp=sharing)). You can see an example of usage in `try_detector.ipynb`. Examples of face detections:   | 1,074 |
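The FaceBoxes readme above points to `try_detector.ipynb` and a frozen `.pb` graph for inference. A hedged sketch of the typical call pattern follows; the `FaceDetector` constructor and call signatures shown here are assumptions, not verified against the notebook.

```python
import numpy as np
from PIL import Image
from face_detector import FaceDetector  # file shipped with the repository

# Constructor and call arguments below are assumptions based on the readme;
# check try_detector.ipynb for the exact interface.
detector = FaceDetector("model.pb")  # path to the downloaded frozen inference graph

image = np.asarray(Image.open("example.jpg").convert("RGB"))
boxes, scores = detector(image, score_threshold=0.5)

# Box layout (ymin, xmin, ymax, xmax) is also an assumption.
for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
    print(f"face ({xmin:.0f}, {ymin:.0f})-({xmax:.0f}, {ymax:.0f}) score={score:.2f}")
```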
TropComplique/mtcnn-pytorch | ['face detection', 'face alignment'] | ['Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks'] | src/__init__.py extract_weights_from_caffe_models.py src/visualization_utils.py src/box_utils.py src/first_stage.py src/detector.py src/get_nets.py get_all_weights calibrate_box nms correct_bboxes convert_to_square _preprocess get_image_boxes detect_faces _generate_bboxes run_first_stage RNet ONet Flatten PNet show_bboxes data join list transpose params minimum concatenate maximum delete argsort append len maximum zeros_like expand_dims hstack fromarray correct_bboxes asarray _preprocess size BILINEAR resize zeros range len expand_dims transpose vstack round nms FloatTensor append expand_dims PNet size eval get_image_boxes calibrate_box ONet run_first_stage convert_to_square Variable reshape min rnet onet RNet numpy nms asarray FloatTensor Variable _preprocess size _generate_bboxes BILINEAR resize numpy net vstack array where ellipse Draw copy rectangle range | # MTCNN `pytorch` implementation of **inference stage** of face detection algorithm described in [Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks](https://arxiv.org/abs/1604.02878). ## Example  ## How to use it Just download the repository and then do this ```python from src import detect_faces from PIL import Image | 1,075 |
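The MTCNN readme's usage snippet above is cut off after the two imports. A plausible continuation is sketched below; everything past the imports is an assumption about how `detect_faces` is called rather than a quote from the repository.

```python
from src import detect_faces
from PIL import Image

image = Image.open("image.jpg")

# Assumed return values: an array of face bounding boxes (with a
# confidence score per box) and an array of facial landmark points.
bounding_boxes, landmarks = detect_faces(image)

print(len(bounding_boxes), "faces found")
```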
TrustAI/GUAP | ['adversarial attack'] | ['Generalizing Universal Adversarial Attacks Beyond Additive Perturbations'] | attack_model.py models/mobilenetv2.py models/resnext.py models/senet.py run_imagenet.py models/googlenet.py models/wide_resnet.py models/__init__.py models/densenet.py models/lenet.py utils.py models/mobilenet.py models/dpn.py models/preact_resnet.py models/shufflenet.py models/vgg.py models/pnasnet.py run_cifar.py models/shufflenetv2.py models/efficientnet.py models/resnet.py Generator ResnetBlock flow_st Loss_flow cal_l2dist weights_init norm_ip DenseNet201 DenseNet161 DenseNet121 Transition DenseNet Bottleneck densenet_cifar test DenseNet169 DPN Bottleneck test DPN92 DPN26 drop_connect Block SE EfficientNet swish test EfficientNetB0 GoogLeNet Inception test LeNet Block MobileNet test Block test MobileNetV2 PNASNetB PNASNetA SepConv test PNASNet CellB CellA PreActBlock PreActResNet18 PreActResNet PreActBottleneck ResNet ResNet18 ResNet34 Bottleneck ResNet101 test ResNet50 BasicBlock ResNet152 Block ResNeXt29_4x64d ResNeXt ResNeXt29_2x64d test_resnext ResNeXt29_32x4d ResNeXt29_8x64d PreActBlock SENet18 SENet test BasicBlock ShuffleNetG2 ShuffleNet Bottleneck test ShuffleBlock ShuffleNetG3 SplitBlock test DownBlock ShuffleBlock ShuffleNetV2 BasicBlock VGG test BasicBlock NetworkBlock WideResNet normal_ __name__ fill_ clamp repeat long permute unsqueeze float cuda sum shape sqrt unsqueeze append numpy range min clamp_ div_ float max print net densenet_cifar randn DPN92 empty div_ mul_ bernoulli_ EfficientNetB0 shape size GoogLeNet MobileNet MobileNetV2 PNASNetB ResNet18 randn print size ResNeXt29_2x64d net SENet18 ShuffleNetG2 ShuffleNetV2 VGG | # GUAP: Generalizing Universal Adversarial Attacks Beyond Additive Perturbations Tool for generating spatial-transfermed or additive universarial perturbations, the paper '[Generalizing Universal Adversarial Attacks Beyond Additive Perturbations](https://arxiv.org/pdf/2010.07788.pdf)' was accepted by [ICDM 2020](http://icdm2020.bigke.org/). Please cite Yanghao Zhang, Wenjie Ruan, Fu Wang, and Xiaowei Huang, Generalizing Universal Adversarial Attacks Beyond Additive Perturbations, The IEEE International Conference on Data Mining (ICDM 2020), November 17-20, 2020, Sorrento, Italy  In this paper, for the first time we propose a unified and flexible framework, which can capture the distribution of the unknown additive and non-additive adversarial perturbations jointly for crafting Generalized Universal Adversarial Perturbations. Specifically, GUAP can generate either additive (i.e., l_inf-bounded) or non-additive (i.e., spatial transformation) perturbations, or a com- bination of both, which considerably generalizes the attacking capability of current universal attack methods. ## Running environment: python 3.6.10 pytorch 1.5.0 ## Colab demo: | 1,076 |
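The GUAP readme above describes crafting a universal perturbation that combines an l_inf-bounded additive part with a spatial (flow-field) transformation. The PyTorch sketch below only illustrates how such a combined perturbation could be applied to a batch; it is not the repository's implementation, and the flow and epsilon scales are assumptions.

```python
import torch
import torch.nn.functional as F

def apply_guap(images, flow, noise, eps=8 / 255):
    # images: (N, C, H, W) in [0, 1]; flow: (1, H, W, 2) sampling-grid offsets;
    # noise: (1, C, H, W) universal additive perturbation.
    n, c, h, w = images.shape
    identity = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
    grid = F.affine_grid(identity, (1, c, h, w), align_corners=True)  # (1, H, W, 2)
    # Non-additive part: warp every image with the same small flow field.
    warped = F.grid_sample(images, (grid + flow).expand(n, -1, -1, -1),
                           align_corners=True)
    # Additive part: clip the universal noise to the l_inf budget.
    return (warped + noise.clamp(-eps, eps)).clamp(0.0, 1.0)

x = torch.rand(4, 3, 32, 32)
flow = 0.01 * torch.randn(1, 32, 32, 2)
noise = 0.03 * torch.randn(1, 3, 32, 32)
print(apply_guap(x, flow, noise).shape)
```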
TrustworthyDL/LeBA | ['adversarial attack'] | ['Learning Black-Box Attackers with Transferable Priors and Query Feedback'] | imagenet/models/vgg.py data_utils.py imagenet/models/__init__.py imagenet/models/densenet.py imagenet/models/inception.py test_tool.py imagenet/models/resnet.py defense/jpeg_compress.py defense/defense.py get_result.py imagenet/get_model.py imagenet/__init__.py run_attack.py LeBA10.py ImageSet check_mkdir load_images_data Logger get_preprocess normalize get_model QueryModel gauss_conv get_data distance index_ adjust_learning_rate select_points get_trans_advimg normalize parse_args get_gauss_diff update_img sample_byprob get_diff_gauss normalizer update_slice attack_black run_attack_train before_query_iter TrainModelS gkern get_cmd run_cmd check_mkdir test_old test_new test_P_RGF _jpeg_compression jpeg_compression_batch padding_layer_iyswim get_defense gauss_filter _jpeg_compression get_model _bn_function_factory densenet161 _load_state_dict DenseNet densenet169 densenet201 _DenseLayer _DenseBlock _densenet _Transition densenet121 InceptionB InceptionC InceptionAux BasicConv2d InceptionD InceptionE InceptionA Inception3 inception_v3 conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock resnet101 vgg19 VGG vgg16_bn _vgg vgg19_bn vgg11_bn vgg13 vgg11 make_layers vgg13_bn vgg16 makedirs DataLoader Compose ImageSet read_csv to device outer sum linspace pdf to astype float32 conv2d stack expand_dims float norm unsqueeze distance unsqueeze reshape clamp randn shape gkern zeros tensor range max to get_gauss_diff range topk sample_byprob reshape shape permute append long range data backward print gauss_conv reshape clone zero_grad max_distance query update_slice distance unsqueeze zero_ model2 preprocess2 proj range detach data backward gauss_conv reshape clone zero_grad max_distance unsqueeze zero_ model2 preprocess2 proj range param_groups append to next range query2 print clamp reshape clone query to attack_black append range reshape max_distance get_data distance ba_num query index_ save select_points ones get_trans_advimg shape iter to sum range cat state_dict normalizer get_diff_gauss update_slice model2 float int print clamp train_model_s clone before_query_iter proj epsilon len add_argument ArgumentParser FL_rate localtime strftime print sum mean len print sum mean len print sum mean len BytesIO save open to range device randint pad zeros range gaussian_filter shape list group match load_state_dict load_state_dict_from_url keys compile _load_state_dict DenseNet load_state_dict_from_url Inception3 load_state_dict ResNet load_state_dict load_state_dict_from_url Conv2d load_state_dict_from_url make_layers VGG load_state_dict | ## Learning Black-Box Attackers with Transferable Priors and Query Feedback
[Jiancheng Yang](https://jiancheng-yang.com/)\*, Yangzhou Jiang\*, [Xiaoyang Huang](http://scholar.google.com/citations?user=Svw7X6kAAAAJ&hl=en), [Bingbing Ni](https://scholar.google.com/citations?user=eUbmKwYAAAAJ&hl=zh-CN), [Chenglong Zhao](https://scholar.google.com/citations?user=wl55lFoAAAAJ&hl=zh-CN).
*Neural Information Processing Systems (NeurIPS), 2020* ([arXiv](https://arxiv.org/abs/2010.11742))
### Abstract
This paper addresses the challenging black-box adversarial attack problem, where only classification confidence of a victim model is available. Inspired by consistency of visual saliency between different vision models, a surrogate model is expected to improve the attack performance via transferability. By combining transferability-based and query-based black-box attack, we propose a surprisingly simple baseline approach (named SimBA++) using the surrogate model, which significantly outperforms several state-of-the-art methods. Moreover, to efficiently utilize the query feedback, we update the surrogate model in a novel learning scheme, named High-Order Gradient Approximation (HOGA). By constructing a high-order gradient computation graph, we update the surrogate model to approximate the victim model in both forward and backward pass. The SimBA++ and HOGA result in Learnable Black-Box Attack (LeBA), which surpasses previous state of the art by large margins: the proposed LeBA reduces 34%-78% queries, while keeping higher attack success rates close to 100% in extensive ImageNet experiments, including attacking vision benchmarks and defensive models.
### Implementation
| 1,077 |
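The LeBA abstract above builds on SimBA-style query attacks (SimBA++). As background, here is a generic sketch of that kind of query loop: perturb one random coordinate per step and keep the change only if the victim's confidence in the true class drops. It is not the paper's LeBA/HOGA procedure, and the step size and query budget are assumptions.

```python
import torch

@torch.no_grad()
def simba_sketch(model, x, label, eps=0.2, max_queries=1000):
    # x: (1, C, H, W) image in [0, 1]; label: true class index.
    x_adv = x.clone()
    perm = torch.randperm(x_adv.numel())
    best = torch.softmax(model(x_adv), dim=1)[0, label]
    for i in range(min(max_queries, x_adv.numel())):
        delta = torch.zeros_like(x_adv).view(-1)
        delta[perm[i]] = eps
        delta = delta.view_as(x_adv)
        for sign in (1.0, -1.0):
            cand = (x_adv + sign * delta).clamp(0.0, 1.0)
            prob = torch.softmax(model(cand), dim=1)[0, label]
            if prob < best:  # keep the perturbation only if confidence drops
                x_adv, best = cand, prob
                break
    return x_adv

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
adv = simba_sketch(model, torch.rand(1, 3, 32, 32), label=3)
```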
Turmac/DIW_TF_Implementation | ['depth estimation'] | ['Single-Image Depth Perception in the Wild'] | evaluate.py relative_depth_diw.py _count_correct _classify reset_record evaluate relative_loss_criterion inception feed_forward load_images Channel2 Channel4 Channel3 feed_forward_2 load_targets Channel1 relative_loss hourglass _classify range len _count_correct min reset_record float range hourglass abs to_float value Variable slice squeeze stack gather_nd range list concatenate len resize append expand_dims array range open append list range len | # DIW_TF_Implementation TensorFlow implementation of Single-Image Depth Perception in the Wild: https://arxiv.org/pdf/1604.03901.pdf <img src="https://user-images.githubusercontent.com/5975007/97656096-a0016680-1a34-11eb-886a-8e5282c2a3cd.png" alt="diw"/> Tested on TensorFlow 1.8 Usage: 1. Download DIW dataset 2. Change 'DIW_PATH' in relative_depth_diw.py accordingly 3. Start training with python relative_depth_diw.py | 1,078 |
TysonYu/Laysumm | ['document summarization'] | ['Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization'] | BART/inference_laysumm.py BART/preprocess.py BART/preprocess_sent_label.py BART/others/utils.py BART/cal_rouge.py BART/train.py BART/module/multi_task_model.py BART/others/optimizer.py BART/others/logging.py BART/trainer.py test_rouge process rouge_results_to_str chunks get_part_intro get_part_conclu get_full_intro get_abs get_full_conclusion tokenize multi_news_builder MultiNewsReader MultiNewsDataset multi_news_builder calrouge calLabel MultiNewsReader rouge_eval MultiNewsDataset make_log_file_name load_dataloader load_model evaluation_multi train_multi evaluation make_file_name train make_padding_mask PretrainedBartModel LayerNorm _get_shape multi_task_model _make_linear_from_emb _check_shapes shift_tokens_right SinusoidalPositionalEmbedding SelfAttention invert_mask _reorder_buffer DecoderLayer _prepare_bart_decoder_inputs LearnedPositionalEmbedding BartForQuestionAnswering BartModel BartDecoder BartEncoder EncoderLayer BartClassificationHead fill_with_neg_inf _filter_out_falsey_values BartForSequenceClassification init_logger Optimizer build_optim load initialize_weights count_parameters get_mask get_lens pad_sents get_max_len save tile neginf batch_generator format Rouge155 output_to_dict strftime localtime convert_and_evaluate mkdir range len range len int list map chunks append Pool range enumerate len join readlines open append join readlines open append join readlines open append join readlines open join print readlines append open from_pretrained str percentage print MultiNewsReader minor_data data_path max_len DataLoader save data_name MultiNewsDataset mode compute fmeasure add compute fmeasure add join deepcopy calrouge rouge_eval append max range len percentage pre_trained_src data_name log_file load percentage str format info minor_data data_name dataset len from_pretrained load pre_trained_lm format train_from customiza_model load_state_dict info saving_path model clip_grad_norm_ zero_grad save data_name cuda clip accumulation_steps range epoch format info make_file_name learning_rate backward parameters evaluation step makedirs from_pretrained test_rouge process_num info generate readlines tqdm data_name rouge_results_to_str cuda open percentage format saving_path pre_trained_src data_name saving_path clip_grad_norm_ zero_grad save data_name cuda clip evaluation_multi accumulation_steps range epoch format info make_file_name learning_rate backward parameters train step makedirs from_pretrained test_rouge process_num info generate readlines tqdm data_name rouge_results_to_str cuda open pad_token_id shift_tokens_right size make_padding_mask invert_mask to data shape Linear squeeze unsqueeze clone eq index_select items is_available setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO FileHandler items list Optimizer set_parameters named_parameters lr max_grad_norm load_state_dict optim is_tensor cuda values dump makedirs close dirname open close open get_lens min append max enumerate append get_lens min max max xavier_uniform_ data list view size contiguous range len | # Laysumm ### This repo is for our EMNLP 2020 SDP workshop paper "Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization" https://arxiv.org/abs/2010.09252 | 1,079 |
UC-MACSS/persp-analysis_A18 | ['word embeddings'] | ['The Geometry of Culture: Analyzing Meaning through Word Embeddings'] | Assignments/A7/get_r.py Assignments/A7/test_r.py Assignments/A1/DescrStat.py Lab_Notes/Lab_3/example_PS.py DescrStat get_r test_get_rvals test_get_r pareto_wealth_sim print loadtxt variance mean std get_r get_r array seed exp log zeros pareto range | # MACS 30000: Perspectives on Computational Analysis (Autumn 2018) | | [Dr. Richard Evans](https://sites.google.com/site/rickecon/) | [Joshua G. Mausolf](http://jmausolf.github.io/) (TA) | [Nora Nickels](https://voices.uchicago.edu/nnickels/) (TA) | |--------------|----------------------------------------------------|-------------------------------------------------------|------------------------------------------------------| | Email | [email protected] | [email protected] | [email protected] | | Office | 208 McGiffert House | 204 McGiffert House | 205 McGiffert House | | Office Hours | Tu 10:30a-12:30p | M 1:30p-3:00p | W 2:00p-4:00p | | GitHub | [rickecon](https://github.com/rickecon) | [jmausolf](https://github.com/jmausolf) | [nnickels](https://github.com/nnickels) | * **Meeting day/time**: MW 11:30a-1:20p, 247 Saieh Hall for Economics * **Lab session**: W 4:30-5:20p, 247 Saieh Hall for Economics * Office hours also available by appointment | 1,080 |
UCLBrain/LearnNoisyLabels_Public | ['medical image segmentation'] | ['Disentangling Human Error from the Ground Truth in Segmentation of Medical Images'] | adamW.py Segmentation.py Models.py Train_punet.py preprocessing/Prepare_BRATS.py Run.py Train_GCM.py Train_unet.py data_simulation/artificial_wrong_mask.py preprocessing/Prepare_MNIST.py Train_ours.py preprocessing/Prepare_BRATS_noisy_label.py Utilis.py Loss.py AdamW noisy_label_loss dice_loss noisy_label_loss_low_rank conv_block cm_layers low_rank_cm_layers ProbabilisticUnet AxisAlignedConvGaussian UpConvBlock UNet_GlobalCMs double_conv Encoder UNet Fcomb Unet gcm_layers DownConvBlock UNet_CMs segmentation getData trainSingleModel trainGCMModels getData trainSingleModel trainModels train_punet getData trainSingleModel trainUnet evaluate_noisy_label_3 save_mask_prediction_example evaluate_noisy_label evaluate_noisy_label_5 CustomDataset truncated_normal_ calculate_cm generalized_energy_distance test_punet evaluate_noisy_label_2 evaluate_noisy_label_7 evaluate_punet test init_weights l2_regularisation evaluate_noisy_label_4 segmentation_scores init_weights_orthogonal_normal CustomDataset_punet preprocessing_accuracy evaluate_noisy_label_6 evaluate unzip_all prepare_data chunks main_loop single_loop delete_all generate_patches unzip_all prepare_data chunks main_loop single_loop delete_all generate_patches generate_patches divide_data chunks main_loop view size zip sum long bmm view size contiguous zip to sum long enumerate sum view model device max str squeeze calculate_cm append to range imsave asarray size eval mkdir enumerate load repeat zeros numpy str UNet_GlobalCMs trainSingleModel getData range DataLoader CustomDataset_punet noisy_label_loss zero_grad save device evaluate_noisy_label_5 max open str view squeeze step default_timer calculate_cm append to sum range imsave model_seg SummaryWriter format asarray size close eval evaluate_noisy_label_4 mkdir segmentation_scores noisy_label_loss_low_rank add_scalars enumerate bmm evaluate_noisy_label_6 backward print AdamW reshape contiguous write parameters repeat zeros train numpy str trainSingleModel getData range UNet_CMs zero_grad DataLoader device forward str Adam to range ProbabilisticUnet test_punet evaluate_punet mkdir item CustomDataset_punet enumerate elbo backward print parameters train step len str UNet trainSingleModel getData range CustomDataset model dice_loss Adam param_groups test softmax evaluate sigmoid generalized_energy_distance model1 eval segmentation_scores model2 append to numpy max enumerate generalized_energy_distance model1 reshape size eval numpy segmentation_scores model2 append to sum max enumerate generalized_energy_distance reshape size eval numpy segmentation_scores model1 append to sum max enumerate generalized_energy_distance view size eval numpy segmentation_scores model1 append to sum max enumerate bmm generalized_energy_distance view size eval numpy segmentation_scores model1 append to sum max enumerate bmm generalized_energy_distance model1 view size eval numpy segmentation_scores model2 append to sum max enumerate generalized_energy_distance view size eval numpy segmentation_scores model1 append to sum max enumerate eval eval squeeze add_ copy_ shape normal_ kaiming_normal_ weight weight orthogonal_ parameters norm imshow savefig str device forward open str shape append to range imsave generalized_energy_distance close eval mkdir segmentation_scores sample float enumerate print write numpy generalized_energy_distance float eval numpy 
segmentation_scores device sample to forward range append enumerate ones_like zeros_like copy where histogram sum len ones_like asarray zeros_like astype where numpy confusion_matrix view range len print join listdir replace print join remove listdir random floor count_nonzero str shape ceil imsave range asarray concatenate get_fdata mean splitext unique listdir load join int T print sort reshape std split str int list len chunks choice listdir makedirs list generate_patches set single_loop prepare_data list shuffle chunks listdir len makedirs where generate_patches makedirs | [](LICENSE.md) This repository contains a PyTorch implementation of the NeurIPS 2020 paper ["Disentangling Human Error from the Ground Truth in Segmentation of Medical Images", 2020](https://arxiv.org/pdf/2007.15963.pdf). [Mou-Cheng Xu](https://moucheng2017.github.io/) is the main developer of the Python code; [Le Zhang](https://cheonglok.github.io/l.zhang/) is the main developer of the data simulation code. # How to use our code for further research We recommend to try the toy-example in [MNIST_example.ipynb](https://github.com/moucheng2017/Learn_Noisy_Labels_Medical_Images/blob/master/MNIST_example.ipynb) to understand the pipeline, this is a simplied main function for MNIST, similar to other main functions in [Train_GCM.py](https://github.com/moucheng2017/Learn_Noisy_Labels_Medical_Images/blob/master/Train_GCM.py), [Train_ours.py](https://github.com/moucheng2017/Learn_Noisy_Labels_Medical_Images/blob/master/Train_ours.py), [Train_puunet.py](https://github.com/moucheng2017/Learn_Noisy_Labels_Medical_Images/blob/master/Train_punet.py) and [Train_unet.py](https://github.com/moucheng2017/Learn_Noisy_Labels_Medical_Images/blob/master/Train_unet.py). 1. If you want to apply our code on your own medical data-sets: Following MNIST_example.ipynb, you might want to replace the data-loader with your own data-loader for your preferred pre-processing. An example for a data-loader can be found in [Utilis.py](https://github.com/moucheng2017/Learn_Noisy_Labels_Medical_Images/blob/master/Utilis.py), namely CustomDataset_punet. 2. If you want to plug-in the proposed loss function and play around: The loss function is implemented in Loss.py as noisy_label_loss. 3. If you want to adapt our Run.py for your application, you need to prepare data stored in a specific way: | 1,081 |
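Item 2 of the LearnNoisyLabels readme above points to `noisy_label_loss` in `Loss.py`, which couples the segmentation output with per-annotator confusion matrices. The sketch below is only a conceptual stand-in for that idea; the function name, signature, tensor layout, and the trace-regulariser weighting are assumptions, not the repository's code.

```python
import torch
import torch.nn.functional as F

def noisy_label_loss_sketch(seg_logits, annotator_cms, noisy_labels, alpha=0.1):
    # seg_logits: (N, C, H, W) logits for the latent "true" segmentation.
    # annotator_cms: (N, C, C, H, W) pixel-wise confusion matrix of one annotator,
    #   normalised over the observed-class axis (dim 1).
    # noisy_labels: (N, H, W) integer labels produced by that annotator.
    true_probs = torch.softmax(seg_logits, dim=1)
    # Push the estimated true segmentation through the annotator's confusion matrix.
    noisy_probs = torch.einsum("ncdhw,ndhw->nchw", annotator_cms, true_probs)
    ce = F.nll_loss(torch.log(noisy_probs + 1e-8), noisy_labels)
    # Trace term on the confusion matrices (regularised as in the paper;
    # the weight alpha and its placement here are assumptions).
    trace = annotator_cms.diagonal(dim1=1, dim2=2).sum(dim=-1).mean()
    return ce + alpha * trace

logits = torch.randn(2, 4, 8, 8)
cms = torch.softmax(torch.randn(2, 4, 4, 8, 8), dim=1)
labels = torch.randint(0, 4, (2, 8, 8))
print(noisy_label_loss_sketch(logits, cms, labels))
```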
UCSB-VRL/Purdue3DCell | ['cell segmentation'] | ['Accurate 3D Cell Segmentation using Deep Feature and CRF Refinement'] | losses.py utils/parser.py utils/__init__.py utils/criterions.py hist_match.py celldataset.py groupnorm.py group.py predict.py postprocessing.py main.py model.py GroupNorm3d GroupNorm2d _GroupNorm group_norm GroupNorm3d GroupNorm2d _GroupNorm group_norm main hist_match Recall_img Precision_img Recall F1_score_img F1_score Precision main train AverageMeter adjust_learning_rate class_avg dice_loss dice mse_f1 crossentropy Parser repeat float64 astype shape unique interp ravel str cell_testing concatenate astype __len__ WriteImage GetImageFromArray hist_match append cell_training range enumerate argmax size sum range len argmax size sum range len Precision Recall size sum size sum Precision_img Recall_img DataLoader adjust_learning_rate save cuda seed len Adam getattr format start_epoch info manual_seed Modified3DUNet join criterion parameters train epochs gpu split update time data_parallel criterion backward set_grad_enabled AverageMeter zero_grad item step cuda enumerate param_groups sum float softmax range numel softmax float sum range sigmoid mean sum unsqueeze | # 3D UNet Cell Segmentation ## This is for 3D membrane tagged image segmentation ### Prerequisites SimpleITK, pytorch, scikit-image, opencv, DenseInferenceWrapper ### Running the tests python hist_match.py python predict.py python postprocessing.py All this repo has been moved to [3DPavementCell](https://github.com/UCSB-VRL/3DPavementCell) | 1,082 |
UKPLab/acl2020-empowering-active-learning | ['active learning'] | ['Empowering Active Learning to Jointly Optimize System and User Demands'] | train_model.py user_simulation/__init__.py active_learning/__init__.py readers/__init__.py user_simulation/simulated_learner.py get_data main flatten_data InstanceSampler get_label load_data Simulated_Learner UserQuery append stack extend InstanceSampler best_model get_data uncertainty sample_instance flatten_data ArgumentParser accuracy_score Input update_lambda abs open seed set_user_goal AccuracyScore Model finish_iteration append parse_args range set_lambda lambda_schedule results set_sampling_strategy format al_iterations get_sampled_instances sample_class set_pred_probs test UserQuery close init_weights load_weights info get_ctest_class sampling_strategy compile deepcopy int join print add_argument write fit extend history load_data set_uncertainty_comp summary train len | # Empowering Active Learning toJointly Optimize System and User Demands ### Ji-Ung Lee, Christian M. Meyer, and Iryna Gurevych #### [UKP Lab, TU Darmstadt](https://www.informatik.tu-darmstadt.de/ukp/ukp_home/index.en.jsp) Source code and user models from our experiments of our [ACL 2020 article](https://www.aclweb.org/anthology/2020.acl-main.390/). > **Abstract:** Existing approaches to active learning maximize the system performance by sampling unlabeled instances for annotation that yield the most efficient training. However, when active learning is integrated with an end-user application, this can lead to frustration for participating users, as they spend time labeling instances that they would not otherwise be interested in reading. In this paper, we propose a new active learning approach that jointly optimizes the seemingly counteracting objectives of the active learning system (training efficiently) and the user (receiving useful instances). We study our approach in an educational application, which particularly benefits from this technique as the system needs to rapidly learn to predict the appropriateness of an exercise to a particular user, while the users should receive only exercises that match their skills. We evaluate multiple learning strategies and user types with data from real users and find that our joint approach better satisfies both objectives when alternative methods lead to many unsuitable exercises for end users. * **Contact person:** Ji-Ung Lee, [email protected] * UKP Lab: http://www.ukp.tu-darmstadt.de/ * TU Darmstadt: http://www.tu-darmstadt.de/ Drop me a line or report an issue if something is broken (and shouldn't be) or if you have any questions. For license information, please see the LICENSE and README files. | 1,083 |
UKPLab/argotario | ['argument mining'] | ['Adapting Serious Game for Fallacious Argumentation to German: Pitfalls, Insights, and Best Practices'] | argueserver/plugins/domains.py argueserver/Config.py argueserver/plugins/users.py admintools/handle_spam_reports.py argueserver/argserver.py argueserver/front.py argueserver/plugins/events.py argueserver/plugins/topics.py argueserver/plugins/arguments.py argueserver/plugins/sessions.py argueserver/plugins/spamReports.py argueserver/mace.py argueserver/startServer.py argueserver/get-pip.py argueserver/plugins/pointsOverTime.py argueserver/requesthandler.py argueserver/interfaces.py argueserver/plugins/feedback.py argueserver/plugins/__init__.py argueserver/plugins/languages.py admintools/export_arguments.py argueserver/plugins/fallacies.py duringFallacyComposition displayArgumentFR duringJudgeOrSession loadFallacies promptDecisionFR deleteReport displayArgumentsJS duringFallacyInformation deleteArgument changeGoldLabel duringFallacyRecognition promptDecisionJS HTTPServer Screen main bootstrap Plugin Processor REST signal_handler Arguments getHandler getHandler Domains Events getHandler Fallacies getHandler SpamReports getHandler Languages getHandler getHandler PointsOverTime Sessions getHandler SpamReports getHandler getHandler Topics Users getHandler list find remove remove update print int print eval deleteReport deleteArgument input changeGoldLabel print str enumerate str int print len eval deleteReport deleteArgument input enumerate join mkdtemp exit from_line main insert join mkdtemp bootstrap server_close name write shutdown destroy | # Argotario **Argotario** is a serious game that deals with fallacies in everyday argumentation. **Argotario** is a multilingual, open-source, platform-independent application with strong educational aspects, accessible at www.argotario.net. The paper is available in the [ACL Anthology](http://www.aclweb.org/anthology/D17-2002). Please use the following citation: ``` @InProceedings{Habernal.et.al.2017.EMNLP, author = {Habernal, Ivan and Hannemann, Raffael and Pollak, Christian and Klamm, Christopher and Pauli, Patrick and Gurevych, Iryna}, title = {Argotario: Computational Argumentation Meets Serious Games}, booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, month = sep, | 1,084 |
UKPLab/argument-reasoning-comprehension-task | ['common sense reasoning'] | ['The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants'] | experiments/src/main/python/data_loader.py corpus-creation/src/main/python/computeSimilarity.py experiments/src/main/python/attention_lstm.py experiments/src/main/python/main.py experiments/src/main/python/vocabulary_embeddings_extractor.py experiments/src/main/python/models.py corpus-creation/src/main/python/skip-thoughts/skipthoughts.py experiments/src/main/python/convert_word2vec_bin_to_txt.py get_similarity find_most_dissimilar_reasons get_random_reasons save_result_to_file main init_params_bi build_encoder param_init_gru load_model encode load_params init_params load_tables gru_layer ortho_weight preprocess get_layer _p word_features nn_words norm_weight init_tparams build_encoder_bi nn AttentionLSTM AttentionLSTMWrapper __main__ load_single_file __main__ string_to_indices load_single_instance_from_line get_predicted_labels print_error_analysis_dev __main__ get_attention_lstm_intra_warrant get_attention_lstm prepare_word_embeddings_cache extract_word_and_vector_from_glove_file_line load_word_frequencies_and_embeddings load_cached_vocabulary_and_embeddings dictionary_and_embeddings_to_indices extract_embeddings_vectors_for_given_words extract_word_and_vector_from_word2vec_file_line tokenize load_vocabulary_frequencies cosine_similarity print reshape encode float get_similarity sorted list items print dict list print choice dict keys print close seed get update print find_most_dissimilar_reasons len get_random_reasons dict save_result_to_file open split load_tables function init_tparams print load_params build_encoder_bi init_params_bi build_encoder init_params load list strip close OrderedDict zip append open list defaultdict print ones len preprocess append zeros keys range enumerate load word_tokenize tokenize append encode flatten print enumerate list norm zeros keys range len list print flatten keys enumerate OrderedDict shared list items load items list warn OrderedDict norm_weight OrderedDict norm_weight matrix tensor3 matrix tensor3 concatenate svd randn uniform ortho_weight norm_weight astype ortho_weight concatenate dot scan alloc load_word2vec_format save_word2vec_format tokenize int string_to_indices split load_single_instance_from_line endswith readlines append open load_single_file getcwd load_cached_vocabulary_and_embeddings argmax array tolist seed print get_predicted_labels accuracy set dict add enumerate zip append get_attention_lstm_intra_warrant range predict fit print strip readlines open split get asarray plot print Lambda max_pool_lambda_layer Model Input compile merge get asarray plot print Lambda max_pool_lambda_layer Model Input compile merge append strip TweetTokenizer get isinstance strip dict split splitext isfile tokenize open partition float astype split extract_word_and_vector_from_glove_file_line endswith dict extract_word_and_vector_from_word2vec_file_line open dump print extend BZ2File extract_embeddings_vectors_for_given_words load_vocabulary_frequencies load BZ2File get sorted rand dict enumerate len print dictionary_and_embeddings_to_indices load_word_frequencies_and_embeddings | # The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants Source code, data, and supplementary materials for our NAACL 2018 paper and for SemEval 2018 shared task. 
Use the following citation if you use any of the code or the data set: ``` @InProceedings{Habernal.et.al.2018.NAACL.ARCT, title = {The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants}, author = {Habernal, Ivan and Wachsmuth, Henning and Gurevych, Iryna and Stein, Benno}, publisher = {Association for Computational Linguistics}, | 1,085 |
UKPLab/iwcs2017_disambiguation_causality_lexical_markers | ['word embeddings'] | ['Neural Disambiguation of Causal Lexical Markers Based on Context'] | src/model/factory_word_embeddings.py src/model/abstract_corpus.py src/model/nlp_processing.py src/model/abstract_classification_strategy.py src/model/abstract_padding_strategy.py src/model/utils/classification_strategy_names.py src/model/alltlex_sentence_encoding_lexical_marker_in_second_argument_lstm.py src/model/utils/app_properties.py src/model/utils/Errors.py src/model/factory_padding_strategies.py src/model/__init__.py src/model/abstract_document.py src/model/document_altlex.py src/model/padding_mean.py src/model/factory_corpus.py src/model/utils/semantic_relations_corpus_altlex.py src/model/utils/backprogragation_optimizers_creator_method_name.py src/model/utils/app_properties_manager.py src/model/corpus_causal_altlex_train.py src/model/alltlex_sentence_encoding_lexical_marker_in_second_argument_lstm_stated.py src/model/factory_classification_strategy.py src/model/corpus_causal_altlex_dev.py src/model/model_pipeline_classification.py src/model/abstract_word_embedding.py src/semantic_relation_classification_console_app.py src/model/abstract_meta_exp_data.py src/model/glove_word_embedings.py src/model/padding_mode.py src/model/utils/word_embeddings_type.py src/model/utils/meta_corpus_type_names.py src/model/utils/__init__.py src/model/utils/padding_strategies_name.py src/model/utils/corpora_names.py src/model/factory_meta_corpus.py src/model/padding_maximum.py src/model/altlex_meta_exp_sentence_encoding_lexical_marker.py src/model/corpus_causal_altlex_test.py src/model/factory_backprogation_optimizer.py configuration_of_logging logging_levels_value script_input_arguments AbstractClassificationStrategy AbstractCorpus AbstractDocument AbstractMetaExpData AbstractPaddingStrategy AbstractWordEmbedding AltLexSentenceEncodingLexicalMarkerInSecondArgumentLSTM AltLexSentenceEncodingLexicalMarkerInSecondArgumentLSTMStated AltLexMetaExpSentenceEncodingLexicalMarker CorpusCausalAltLexDev CorpusCausalAltLexTest CorpusCausalAltLexTrain DocumentAltLex FactoryBackpropagationOptimizer FactoryClassificationStrategy FactoryCorpus FactoryMetaCorpus FactoryPaddingStrategies FactoryWordEmbeddings GloveWordEmbednigs ModelPipelineClassification NLPProcessing PaddingMaximum PaddingMean PaddingMode AppProperties AppPropertiesManager BackpropagationOptimizersCreatorMethodName ClassificationStrategyNames CorporaName Error NonWordEmbeddingPath NonCorpusTypeException NonWordEmbeddingsHandleType NonClassificationStrategyTypeException ThresholdErrorInPercentageOfDocuments NonMetaCorpusType NonMetaCorpusDefined RelevantSegmetnKeyError MetaCorpusTypeNames PaddingStrategiesName SemanticRelationsCorpusAltLex WordEmbeddingsType I compile add_argument ArgumentParser basicConfig log_level upper getattr log_file compile | # Disambiguation of Neural Disambiguation of Causal Lexical Markers Based on Context Causation is part of the human mental model of reasoning, so it is crucial the understanding of causal relations in text in order to build the causal reasoning expressed by an human in a piece of text. This work is focused on the disambiguation of the causal meaning of lexical markers or discurse connectives. The disambiguation system is based on deep learning and the use of dense vector spaces to represent the meaning of the input utterances. 
The data used is the AltLex Corpus (Hidey and McKeown, 2016) ([repository](https://github.com/chridey/altlex "repository of AltLex corpus"), [paper](https://www.aclweb.org/anthology/P/P16/P16-1135.pdf "paper of AltLex corpus")), and the results reached outperform the state-of-the-art on that corpus. Please use the following citation: ``` @InProceedings{martinezCamara:2017:iwcs2017, author = {Mart{\'i}nez C{\'a}mara, Eugenio and Shwartz, Vared and Gurevych, Iryna and Dagan, Ido}, title = {Neural Disambiguation of Causal Lexical Markers Based on Context}, booktitle = {Proceedings of the 12th International Conference on Computational Semantics (IWCS 2017)}, month = September, year = {2017}, | 1,086 |
UKPLab/naacl2019-argument-annotations | ['argument mining'] | ['A Streamlined Method for Sourcing Discourse-level Argumentation Annotations from the Crowd'] | src/main/python/add_review_texts.py main parse open str time parse join isdir unescape print exit write set mkdir getroot copy2 find | # A streamlined method for sourcing discourse-level argumentation annotations from the crowd This project contains source code for a discourse-level argument annotation pipeline, as well as crowdsourced data applied to a subset of Julian McAuley's [Amazon product data](http://jmcauley.ucsd.edu/data/amazon/index.html). If you reuse this software or data, please use the following citation: > Tristan Miller, Maria Sukhareva, and Iryna Gurevych. [A streamlined > method for sourcing discourse-level argumentation annotations from > the crowd](https://www.aclweb.org/anthology/N19-1177). In > [Proceedings of the 17th Annual Conference of the North American | 1,087 |
UMBCvision/Universal-Litmus-Patterns | ['traffic sign recognition'] | ['Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs'] | tiny-imagenet/generate_poison.py tiny-imagenet/train_ULP.py CIFAR-10/plot_ROC_curves.py CIFAR-10/utils/stn.py CIFAR-10/utils/augmentation.py CIFAR-10/utils/model.py tiny-imagenet/evaluate_ULP.py CIFAR-10/train_poisoned_model.py CIFAR-10/evaluate_ULP.py tiny-imagenet/train_poisoned_model.py CIFAR-10/utils/cache.py tiny-imagenet/convert_data.py tiny-imagenet/train_clean_model.py CIFAR-10/utils/download_CIFAR10.py tiny-imagenet/plot_ROC_curves.py tiny-imagenet/data_cleaning.py CIFAR-10/utils/dataset.py CIFAR-10/generate_poison.py CIFAR-10/evaluate_noise.py CIFAR-10/utils/__init__.py CIFAR-10/utils/backdoor_attack.py CIFAR-10/train_clean_model.py CIFAR-10/train_ULP.py tiny-imagenet/evaluate_noise.py tiny-imagenet/resnet.py CIFAR-10/utils/data_loader_CIFAR10.py getLogit stretch_value StratifiedSampler CIFAR10 dataset_append StratifiedSampler custom_Dataset CIFAR10 CIFAR10 translate_image projection_transform rotate_image augment_and_balance_data transform_image generate_poisoned_data add_patch cache ExpensiveClass expensive_function convert_numpy2pickle load_cached one_hot_encoded DataSet load_class_names _load_data load_test_data _get_file_path load_CIFAR10 _convert_images load_training_data _unpickle maybe_download_and_extract _print_download_progress download maybe_download_and_extract CNN_classifier STN generate_poisoned_data save_image add_patch resnet18_mod conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock resnet101 view matmul rgb2hsv astype equalize_hist cat custom_Dataset uniform rotate AffineTransform uniform estimate uniform ProjectiveTransform array warp translate_image shape projection_transform rotate_image print reshape shape unique zip append empty full range len shape astype stack astype argwhere print fn exists load max print cache print _get_file_path reshape transpose array array _unpickle _convert_images zeros len range _load_data _load_data load_class_names load_test_data load_training_data maybe_download_and_extract makedirs format min write float flush print join urlretrieve makedirs join urlretrieve print endswith extractall makedirs imwrite ResNet load_state_dict load_state_dict_from_url | # Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs <p align="center"> <img width="750" src=https://github.com/UMBCvision/Universal-Litmus-Patterns/blob/master/docs/assets/images/teaser.png> </p> ### Abstract The unprecedented success of deep neural networks in many applications has made these networks a prime target for adversarial exploitation. In this paper, we introduce a benchmark technique for detecting backdoor attacks (aka Trojan attacks) on deep convolutional neural networks (CNNs). We introduce the concept of Universal Litmus Patterns (ULPs), which enable one to reveal backdoor attacks by feeding these universal patterns to the network and analyzing the output (i.e., classifying the network as ‘clean’ or ‘corrupted’). This detection is fast because it requires only a few forward passes through a CNN. We demonstrate the effectiveness of ULPs for detecting backdoor attacks on thousands of networks with different architectures trained on four benchmark datasets, namely the German Traffic Sign Recognition Benchmark (GTSRB), MNIST, CIFAR10, | 1,088 |
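The Universal Litmus Patterns abstract above describes feeding a small set of learned input patterns through a suspect CNN and classifying it as clean or backdoored from the resulting outputs. A minimal PyTorch sketch of that detection forward pass follows; the number of patterns, the pooling, and the classifier head are assumptions.

```python
import torch
import torch.nn as nn

class ULPDetector(nn.Module):
    # A few learned input patterns plus a small classifier over the
    # concatenated logits the suspect model produces for them.
    def __init__(self, num_patterns=10, image_shape=(3, 32, 32), num_classes=10):
        super().__init__()
        self.patterns = nn.Parameter(torch.rand(num_patterns, *image_shape))
        self.classifier = nn.Linear(num_patterns * num_classes, 2)  # clean vs. backdoored

    def forward(self, suspect_model):
        logits = suspect_model(self.patterns)                   # (num_patterns, num_classes)
        return self.classifier(logits.flatten().unsqueeze(0))   # (1, 2)

# Both the patterns and the classifier would be trained on a pool of known
# clean/poisoned models; here we only run a single forward pass.
suspect = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
detector = ULPDetector()
print(detector(suspect).softmax(dim=1))
```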
UW-AMO/TimeSeriesES-Cell | ['time series analysis', 'time series'] | ['Time Series Using Exponential Smoothing Cells'] | RobustExponentialSmoothing.py ESModels.py ESModels window_func RobustES | # TimeSeriesES-Cell This repository contains code for time series analysis, developed in the paper: [Time Series Using Exponential Smoothing Cells](https://arxiv.org/abs/1706.02829), by [Avner Abrami](https://www.linkedin.com/in/avnerabrami/), [Aleksandr Aravkin](https://sites.google.com/site/saravkin/), and [Younghun Kim](https://www.linkedin.com/in/younghun-kim-20441249/). ## Overview Exponential smoothing (ES) techniques such as the Holt-Winters model break down in challenging situations, including * high level of noise * large or frequent outliers * significant portions of missing data * nonstationary features. | 1,089
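For reference alongside the TimeSeriesES-Cell overview above, this is the textbook simple exponential smoothing recursion whose sensitivity to outliers motivates the robust ES-cell formulation; it is illustrative only and not the repository's `ESModels` code.

```python
def simple_exponential_smoothing(y, alpha=0.3):
    # s_t = alpha * y_t + (1 - alpha) * s_{t-1}; a single outlier in y is
    # pulled directly into the level estimate, which is the failure mode
    # that robust ES formulations target.
    smoothed = [y[0]]
    for value in y[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

print(simple_exponential_smoothing([1.0, 1.1, 0.9, 25.0, 1.0, 1.05]))
```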
UltraSuite/ultrasuite-kaldi | ['word alignment', 'speaker diarization'] | ['Ultrasound tongue imaging for diarization and alignment of child speech therapy sessions'] | diarization/v1/local/align/merge-short-segments.py diarization/v1/local/data/make_tongue_activity.py diarization/v1/local/data/append_tongue_activity.py diarization/v1/local/align/score-alignment.py diarization/v1/local/data/train-uxtd.py diarization/v1/local/data/utils.py diarization/v1/local/align/ctm-to-lab.py diarization/v1/local/align/lab2tg.py diarization/v1/local/data/decode-uxtd.py diarization/v1/local/data/decode-uxssd-upx.py convert_ctm read_phones main lab2tg main main correct_alignment write main read_annotation main upsample read_tongue_activity downsample main main read_filelist estimate_tongue_activity filelist_by_speaker main write_to_file main read_speaker_map write_data get_duration join write close open float split input split join read_phones format isfile input len makedirs convert_ctm warning info append keys split format Textgrid IntervalTier print save addTier lab2tg replace print listdir close open format print write append to_seconds split correct_alignment Annotation isfile abs max open sorted IdentificationErrorRate exit IdentificationRecall ier_eval sum range prec_eval close der_eval read_annotation rec_eval write DiarizationErrorRate min labels IdentificationPrecision append enumerate join listdir int remove concatenate upsample reshape read_tongue_activity downsample system read_mat_scp get_duration write_data upper read_speaker_map append close write open int join format replace concatenate print reshape size close shape fromfile zeros MinMaxScaler fit_transform range append open read_filelist filelist_by_speaker cpu_count map write_to_file Pool endswith list sorted write close set open communicate format Popen split append isfirstline input split | # Ultrasuite Kaldi This repository contains Kaldi recipes for the [UltraSuite repository](<https://ultrasuite.github.io/>). | 1,090 |
Unity-Technologies/ml-agents | ['unity'] | ['Unity: A General Platform for Intelligent Agents'] | ml-agents-envs/mlagents_envs/communicator_objects/capabilities_pb2.py ml-agents-envs/mlagents_envs/communicator_objects/command_pb2.py ml-agents/mlagents/trainers/environment_parameter_manager.py ml-agents/mlagents/trainers/cli_utils.py ml-agents/mlagents/trainers/run_experiment.py ml-agents/mlagents/trainers/tests/check_env_trains.py ml-agents-envs/mlagents_envs/mock_communicator.py ml-agents/mlagents/trainers/policy/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/unity_to_external_pb2.py ml-agents/mlagents/trainers/tests/torch/test_reward_providers/test_extrinsic.py ml-agents/mlagents/trainers/torch/action_log_probs.py ml-agents/mlagents/trainers/tests/torch/saver/test_saver_reward_providers.py ml-agents-envs/mlagents_envs/communicator.py ml-agents/mlagents/trainers/torch/components/reward_providers/extrinsic_reward_provider.py gym-unity/gym_unity/envs/__init__.py ml-agents-envs/mlagents_envs/communicator_objects/brain_parameters_pb2.py ml-agents/mlagents/trainers/learn.py ml-agents-envs/mlagents_envs/side_channel/raw_bytes_channel.py ml-agents/mlagents/trainers/trainer/trainer.py gym-unity/gym_unity/__init__.py ml-agents-envs/mlagents_envs/side_channel/__init__.py ml-agents/mlagents/trainers/torch/components/reward_providers/gail_reward_provider.py utils/validate_meta_files.py ml-agents/mlagents/trainers/torch/action_model.py ml-agents/mlagents/trainers/tests/torch/test_hybrid.py ml-agents/mlagents/trainers/trainer_controller.py ml-agents/mlagents/trainers/torch/utils.py ml-agents/mlagents/trainers/action_info.py ml-agents/mlagents/trainers/tests/torch/test_ppo.py ml-agents/mlagents/trainers/directory_utils.py ml-agents/mlagents/trainers/model_saver/torch_model_saver.py ml-agents/mlagents/trainers/torch/attention.py ml-agents-envs/setup.py ml-agents-envs/mlagents_envs/side_channel/engine_configuration_channel.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_output_pb2.py ml-agents/mlagents/trainers/tests/mock_brain.py utils/generate_markdown_docs.py ml-agents/mlagents/trainers/policy/checkpoint_manager.py ml-agents/mlagents/trainers/tests/test_trainer_controller.py ml-agents/mlagents/trainers/torch/conditioning.py ml-agents/mlagents/trainers/torch/layers.py ml-agents-envs/mlagents_envs/rpc_utils.py ml-agents/mlagents/trainers/tests/torch/test_encoders.py ml-agents/mlagents/trainers/tests/torch/test_policy.py ml-agents/mlagents/trainers/tests/test_torch_utils.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_initialization_output_pb2.py ml-agents/setup.py ml-agents-envs/mlagents_envs/side_channel/incoming_message.py ml-agents/mlagents/trainers/torch/model_serialization.py ml-agents/mlagents/trainers/policy/torch_policy.py ml-agents/tests/yamato/setup_venv.py utils/run_markdown_link_check.py ml-agents/mlagents/trainers/env_manager.py ml-agents/mlagents/trainers/ppo/trainer.py ml-agents/mlagents/trainers/policy/policy.py ml-agents-envs/mlagents_envs/communicator_objects/agent_action_pb2.py ml-agents/mlagents/trainers/sac/optimizer_torch.py ml-agents-envs/mlagents_envs/tests/test_rpc_communicator.py ml-agents/mlagents/trainers/torch/networks.py ml-agents-envs/mlagents_envs/communicator_objects/training_analytics_pb2.py ml-agents-envs/mlagents_envs/tests/test_envs.py ml-agents/mlagents/trainers/tests/dummy_config.py ml-agents-envs/mlagents_envs/side_channel/float_properties_channel.py utils/validate_inits.py 
ml-agents/mlagents/trainers/torch/components/reward_providers/__init__.py ml-agents/mlagents/trainers/simple_env_manager.py ml-agents/mlagents/plugins/__init__.py ml-agents-envs/mlagents_envs/side_channel/outgoing_message.py ml-agents-envs/mlagents_envs/exception.py ml-agents/mlagents/trainers/tests/torch/test_networks.py ml-agents-envs/mlagents_envs/registry/remote_registry_entry.py ml-agents/mlagents/trainers/trainer/__init__.py ml-agents/mlagents/trainers/tests/torch/test_attention.py ml-agents/mlagents/trainers/upgrade_config.py ml-agents-envs/mlagents_envs/communicator_objects/unity_message_pb2.py ml-agents/mlagents/trainers/tests/torch/test_reward_providers/test_curiosity.py ml-agents/mlagents/trainers/tests/test_learn.py ml-agents/mlagents/torch_utils/__init__.py ml-agents/tests/yamato/scripts/run_gym.py utils/validate_release_links.py ml-agents/mlagents/trainers/torch/action_flattener.py ml-agents/mlagents/trainers/training_analytics_side_channel.py ml-agents-envs/mlagents_envs/communicator_objects/agent_info_pb2.py ml-agents/mlagents/trainers/tests/test_demo_loader.py ml-agents/mlagents/trainers/tests/torch/test_reward_providers/test_gail.py ml-agents-envs/mlagents_envs/communicator_objects/observation_pb2.py utils/validate_versions.py ml-agents-envs/mlagents_envs/tests/test_rpc_utils.py ml-agents/mlagents/trainers/torch/decoders.py ml-agents/mlagents/trainers/__init__.py ml-agents/mlagents/trainers/tests/torch/test_poca.py ml-agents/mlagents/trainers/torch/components/reward_providers/reward_provider_factory.py ml-agents/mlagents/trainers/tests/test_trainers.py ml-agents-envs/mlagents_envs/side_channel/default_training_analytics_side_channel.py ml-agents-envs/mlagents_envs/tests/test_timers.py ml-agents/mlagents/trainers/tests/torch/test_bcmodule.py ml-agents/mlagents/torch_utils/torch.py ml-agents-envs/mlagents_envs/communicator_objects/custom_reset_parameters_pb2.py ml-agents/mlagents/trainers/tests/test_env_param_manager.py ml-agents-envs/mlagents_envs/tests/test_registry.py ml-agents-envs/mlagents_envs/communicator_objects/agent_info_action_pair_pb2.py ml-agents/mlagents/trainers/ppo/optimizer_torch.py ml-agents-envs/mlagents_envs/tests/test_set_action.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_input_pb2.py ml-agents/mlagents/trainers/torch/agent_action.py ml-agents-envs/mlagents_envs/timers.py ml-agents/tests/yamato/check_coverage_percent.py ml-agents/mlagents/trainers/exception.py ml-agents/mlagents/trainers/tests/test_training_analytics_side_channel.py gym-unity/gym_unity/tests/test_gym.py utils/make_readme_table.py ml-agents/mlagents/trainers/buffer.py ml-agents/mlagents/torch_utils/cpu_utils.py ml-agents-envs/mlagents_envs/side_channel/side_channel.py ml-agents/mlagents/trainers/tests/torch/test_ghost.py ml-agents/mlagents/trainers/tests/test_subprocess_env_manager.py ml-agents/mlagents/trainers/tests/torch/test_simple_rl.py ml-agents/mlagents/trainers/subprocess_env_manager.py ml-agents-envs/mlagents_envs/side_channel/environment_parameters_channel.py ml-agents/mlagents/trainers/poca/optimizer_torch.py ml-agents/mlagents/trainers/tests/torch/test_reward_providers/test_rnd.py ml-agents/mlagents/trainers/tests/torch/test_sac.py ml-agents/mlagents/trainers/tests/torch/test_layers.py ml-agents/mlagents/trainers/agent_processor.py ml-agents-envs/mlagents_envs/communicator_objects/engine_configuration_pb2.py ml-agents-envs/mlagents_envs/tests/test_env_utils.py ml-agents/tests/yamato/scripts/run_compressed_sensor.py 
ml-agents/mlagents/trainers/tests/test_rl_trainer.py ml-agents-envs/mlagents_envs/rpc_communicator.py ml-agents/mlagents/trainers/training_status.py ml-agents-envs/mlagents_envs/communicator_objects/demonstration_meta_pb2.py ml-agents-envs/mlagents_envs/__init__.py ml-agents/mlagents/torch_utils/globals.py gym-unity/setup.py ml-agents/mlagents/trainers/torch/distributions.py ml-agents/mlagents/trainers/behavior_id_utils.py ml-agents/mlagents/trainers/tests/test_config_conversion.py ml-agents/mlagents/trainers/optimizer/__init__.py ml-agents-envs/mlagents_envs/registry/__init__.py ml-agents/mlagents/trainers/torch/encoders.py ml-agents/mlagents/trainers/tests/simple_test_envs.py ml-agents-plugin-examples/mlagents_plugin_examples/example_stats_writer.py ml-agents/mlagents/trainers/tests/torch/test_conditioning.py ml-agents/mlagents/trainers/tests/__init__.py ml-agents/mlagents/trainers/tests/torch/test_agent_action.py ml-agents-envs/mlagents_envs/communicator_objects/unity_output_pb2.py ml-agents-envs/mlagents_envs/env_utils.py ml-agents/mlagents/trainers/torch/components/bc/module.py ml-agents-envs/mlagents_envs/communicator_objects/space_type_pb2.py ml-agents/mlagents/trainers/tests/test_trainer_util.py ml-agents-envs/mlagents_envs/logging_util.py ml-agents/mlagents/trainers/sac/trainer.py ml-agents-plugin-examples/setup.py ml-agents-envs/mlagents_envs/side_channel/side_channel_manager.py ml-agents/mlagents/plugins/stats_writer.py ml-agents/tests/yamato/training_int_tests.py ml-agents/mlagents/trainers/tests/torch/test_distributions.py ml-agents/mlagents/trainers/trajectory.py ml-agents/mlagents/trainers/settings.py ml-agents-envs/mlagents_envs/communicator_objects/unity_rl_initialization_input_pb2.py ml-agents-envs/mlagents_envs/base_env.py ml-agents-envs/mlagents_envs/communicator_objects/header_pb2.py ml-agents/mlagents/trainers/tests/test_stats.py ml-agents/mlagents/trainers/model_saver/model_saver.py ml-agents/mlagents/trainers/optimizer/torch_optimizer.py ml-agents/mlagents/trainers/tests/torch/saver/test_saver.py ml-agents-envs/mlagents_envs/side_channel/stats_side_channel.py ml-agents-envs/mlagents_envs/tests/test_side_channel.py ml-agents-envs/mlagents_envs/registry/base_registry_entry.py ml-agents/mlagents/trainers/ghost/controller.py ml-agents/tests/yamato/standalone_build_tests.py ml-agents/mlagents/trainers/tests/torch/test_action_model.py ml-agents/mlagents/trainers/tests/test_training_status.py ml-agents-envs/mlagents_envs/environment.py ml-agents/mlagents/trainers/tests/torch/test_decoders.py ml-agents/mlagents/trainers/demo_loader.py ml-agents/mlagents/trainers/ghost/trainer.py ml-agents-envs/mlagents_envs/registry/binary_utils.py ml-agents/mlagents/trainers/torch/components/reward_providers/base_reward_provider.py ml-agents/mlagents/trainers/tests/torch/test_reward_providers/utils.py ml-agents/mlagents/trainers/tests/test_settings.py ml-agents/mlagents/trainers/torch/components/reward_providers/rnd_reward_provider.py ml-agents-envs/mlagents_envs/communicator_objects/unity_input_pb2.py ml-agents-envs/mlagents_envs/tests/test_steps.py ml-agents/mlagents/trainers/tests/test_buffer.py ml-agents-plugin-examples/mlagents_plugin_examples/tests/test_stats_writer_plugin.py ml-agents/mlagents/trainers/trainer/rl_trainer.py ml-agents/mlagents/trainers/tests/test_agent_processor.py ml-agents/mlagents/trainers/torch/components/reward_providers/curiosity_reward_provider.py ml-agents-envs/mlagents_envs/communicator_objects/unity_to_external_pb2_grpc.py 
ml-agents/tests/yamato/yamato_utils.py ml-agents/mlagents/trainers/tests/torch/test_utils.py ml-agents/mlagents/trainers/stats.py ml-agents/tests/yamato/scripts/run_llapi.py ml-agents-envs/mlagents_envs/registry/unity_env_registry.py ml-agents/mlagents/trainers/poca/trainer.py ml-agents/mlagents/trainers/tests/test_trajectory.py ml-agents/mlagents/trainers/optimizer/optimizer.py ml-agents/mlagents/trainers/trainer/trainer_factory.py VerifyVersionCommand UnityGymException ActionFlattener UnityToGymWrapper create_mock_vector_steps test_gym_wrapper_single_visual_and_vector test_gym_wrapper_multi_visual_and_vector test_gym_wrapper create_mock_group_spec test_branched_flatten setup_mock_unityenvironment test_gym_wrapper_visual test_action_space_seed test_action_space VerifyVersionCommand register_stats_writer_plugins get_default_stats_writers _read_in_integer_file _get_num_available_cpus get_num_threads_to_use get_rank assert_torch_installed default_device set_torch_config ActionInfo AgentManager AgentProcessor AgentManagerQueue BehaviorIdentifiers get_global_agent_id get_global_group_id create_name_behavior_id RewardSignalUtil ObservationKeyPrefix RewardSignalKeyPrefix BufferKey AgentBuffer AgentBufferField BufferException RaiseRemovedWarning StoreConfigFile _load_config DetectDefaultStoreTrue DetectDefault load_config _create_parser make_demo_buffer write_demo get_demo_files load_demonstration write_delimited demo_to_buffer setup_init_path validate_existing_directories _validate_init_full_path EnvironmentParameterManager EnvManager EnvironmentStep SamplerException TrainerConfigError CurriculumError CurriculumConfigError MetaCurriculumError TrainerConfigWarning CurriculumLoadingError UnityTrainerException TrainerError write_timing_tree create_environment_factory write_run_options parse_command_line run_training write_training_status main run_cli get_version_string main parse_command_line ScheduleType TrainerSettings PPOSettings ConstantSettings GaussianSettings strict_to_cls RewardSignalSettings EnvironmentSettings ParameterRandomizationType EnvironmentParameterSettings TorchSettings check_and_structure RewardSignalType TrainerType MultiRangeUniformSettings HyperparamSettings SerializationSettings check_hyperparam_schedules NetworkSettings SACSettings UniformSettings ConditioningType SelfPlaySettings RNDSettings Lesson EngineSettings EncoderType RunOptions GAILSettings CheckpointSettings BehavioralCloningSettings deep_update_dict ParameterRandomizationSettings ExportableSettings CompletionCriteriaSettings defaultdict_to_dict CuriositySettings SimpleEnvManager StatsWriter _dict_to_str StatsSummary ConsoleWriter StatsReporter GaugeWriter TensorboardWriter StatsPropertyType worker EnvironmentResponse EnvironmentRequest UnityEnvWorker StepResponse SubprocessEnvManager EnvironmentCommand TrainerController TrainingAnalyticsSideChannel StatusMetaData StatusType GlobalTrainingStatus ObsUtil Trajectory AgentExperience AgentStatus GroupObsUtil parse_args write_to_yaml_file convert convert_behaviors main remove_nones convert_samplers_and_curriculum convert_samplers GhostController GhostTrainer BaseModelSaver TorchModelSaver Optimizer TorchOptimizer TorchPOCAOptimizer lambda_return POCATrainer ModelCheckpoint ModelCheckpointManager Policy UnityPolicyException TorchPolicy TorchPPOOptimizer PPOTrainer get_gae discount_rewards TorchSACOptimizer SACTrainer default_reward_processor check_environment_trains DebugWriter create_observation_specs_with_shapes sac_dummy_config extrinsic_dummy_config ppo_dummy_config 
poca_dummy_config curiosity_dummy_config gail_dummy_config simulate_rollout create_mock_pushblock_behavior_specs create_mock_banana_behavior_specs create_steps_from_behavior_spec setup_test_behavior_specs create_mock_3dball_behavior_specs make_fake_trajectory create_mock_steps copy_buffer_fields RecordEnvironment MemoryEnvironment UnexpectedExceptionEnvironment clamp SimpleEnvironment MultiAgentEnvironment test_end_episode test_group_statuses _create_action_info test_agent_deletion test_agent_manager_queue test_agentprocessor test_agent_manager test_agent_manager_stats create_mock_policy test_buffer_sample construct_fake_buffer test_num_experiences assert_array test_buffer_save_load test_key_encode_decode test_agentbufferfield fakerandint test_buffer test_buffer_truncate test_convert test_convert_behaviors test_remove_nones test_unsupported_version_raises_error test_load_demo test_demo_mismatch test_edge_cases test_load_demo_dir test_sampler_conversion test_sampler_and_constant_conversion test_create_manager test_curriculum_raises_no_completion_criteria_conversion test_curriculum_no_behavior test_curriculum_conversion test_curriculum_raises_all_completion_criteria_conversion basic_options test_run_training test_yaml_args test_bad_env_path test_commandline_args test_env_args FakeTrainer create_rl_trainer RLTrainerWarningTest test_update_buffer_append test_rl_trainer test_summary_checkpoint test_advance test_clear_update_buffer check_dict_is_at_least test_environment_settings test_config_specified test_strict_to_cls test_trainersettingsschedules_structure test_memory_settings_validation test_default_settings check_if_different test_is_new_instance test_no_configuration test_env_parameter_structure test_exportable_settings test_pickle test_deep_update_dict test_trainersettings_structure test_reward_signal_structure test_tensorboard_writer test_stat_reporter_add_summary_write test_tensorboard_writer_hidden_keys test_tensorboard_writer_clear test_agent_manager_stats_report test_gauge_stat_writer_sanitize ConsoleWriterTest test_stat_reporter_property MockEnvWorker CustomTestOnlyException mock_env_factory test_subprocess_failing_step SubprocessEnvManagerTest test_subprocess_env_raises_errors create_worker_mock test_subprocess_env_endtoend test_set_torch_device test_ppo_trainer_update_normalization test_poca_trainer_update_normalization ppo_config test_sac_trainer_update_normalization sac_config poca_config test_initialization_seed test_start_learning_trains_until_max_steps_then_saves basic_trainer_controller trainer_controller_with_take_step_mocks test_advance_adds_experiences_to_trainer_and_trains trainer_controller_with_start_learning_mocks test_start_learning_trains_forever_if_no_train_model test_initialize_ppo_trainer test_load_config_invalid_yaml test_load_config_missing_file test_handles_no_config_provided dummy_config test_load_config_valid_yaml test_setup_init_path test_existing_directories test_sanitize_run_options test_globaltrainingstatus test_model_management StatsMetaDataTest test_trajectory_to_agentbuffer test_obsutil_group_from_buffer np_zeros_no_float64 np_array_no_float64 _check_no_float64 np_ones_no_float64 create_action_model test_get_dists test_sample_action test_get_probs_and_entropy test_get_onnx_deterministic_tensors test_deterministic_sample_action test_to_flat test_agent_action_group_from_buffer test_slice test_all_masking test_multi_head_attention_initialization test_predict_minimum_training test_zero_mask_layer test_predict_closest_training 
test_multi_head_attention_masking test_bcmodule_rnn_update test_bcmodule_linear_lr_update test_bcmodule_update test_bcmodule_constant_lr_update test_bcmodule_dc_visual_update create_bc_module test_bcmodule_defaults assert_stats_are_float test_bcmodule_rnn_dc_update test_predict_with_condition test_conditional_layer_initialization test_valueheads test_multi_categorical_distribution test_gaussian_dist_instance test_categorical_dist_instance test_tanh_gaussian_dist_instance test_gaussian_distribution test_vector_encoder test_visual_encoder test_normalizer compare_models test_visual_encoder_trains test_process_trajectory test_publish_queue test_load_and_set dummy_config test_resume test_hybrid_visual_ppo test_hybrid_recurrent_sac test_hybrid_visual_sac test_hybrid_sac test_hybrid_recurrent_ppo test_hybrid_ppo test_lstm_layer test_lstm_class test_initialization_layer test_layer_norm test_swish test_networkbody_visual test_multinetworkbody_lstm test_multinetworkbody_vector test_actor_critic test_valuenetwork test_networkbody_vector test_multinetworkbody_num_agents test_networkbody_lstm test_multinetworkbody_visual test_poca_get_value_estimates test_poca_optimizer_update test_poca_optimizer_update_gail dummy_config test_poca_end_episode test_poca_optimizer_update_curiosity create_test_poca_optimizer test_sample_actions create_policy_mock test_policy_evaluate test_evaluate_actions test_step_overflow test_ppo_optimizer_update test_ppo_optimizer_update_curiosity dummy_config create_test_ppo_optimizer test_ppo_optimizer_update_gail test_ppo_get_value_estimates create_sac_optimizer_mock test_sac_update_reward_signals dummy_config test_sac_optimizer_update test_simple_ghost_fails test_recurrent_poca test_gail test_visual_advanced_sac test_visual_poca test_visual_sac test_2d_ppo test_simple_sac test_simple_ghost test_var_len_obs_sac test_var_len_obs_and_goal_poca test_simple_asymm_ghost test_var_len_obs_and_goal_ppo test_gail_visual_ppo test_simple_ppo test_gail_visual_sac test_recurrent_ppo test_recurrent_sac test_simple_asymm_ghost_fails test_visual_advanced_ppo test_visual_ppo test_2d_sac test_simple_poca simple_record test_min_visual_size test_masked_mean test_polynomial_decay test_decayed_value test_soft_update test_create_inputs test_break_into_branches test_invalid_visual_input_size test_actions_to_onehot test_list_to_tensor test_checkpoint_conversion test_register test_load_policy_different_hidden_units test_load_save_optimizer test_load_save_policy _compare_two_optimizers _compare_two_policies test_load_different_reward_provider test_reward_provider_save test_reward_decreases test_construction test_next_state_prediction test_continuous_action_prediction test_factory test_reward test_construction test_factory test_reward_decreases test_construction test_factory test_reward_decreases_vail test_reward_decreases test_construction test_factory create_agent_buffer ActionFlattener ActionLogProbs LogProbsTuple ActionModel DistInstances AgentAction MultiHeadAttention get_zero_entities_mask ResidualSelfAttention EntityEmbedding HyperNetwork ConditionalEncoder ValueHeads CategoricalDistInstance DistInstance DiscreteDistInstance GaussianDistInstance MultiCategoricalDistribution TanhGaussianDistInstance GaussianDistribution Normalizer ResNetBlock conv_output_shape ResNetVisualEncoder FullyConnectedVisualEncoder SimpleVisualEncoder VectorInput SmallVisualEncoder pool_out_shape NatureVisualEncoder Swish lstm_layer Initialization linear_layer LayerNorm LinearEncoder LSTM MemoryModule ModelSerializer 
TensorNames exporting_to_onnx MultiAgentNetworkBody Actor Critic SimpleActor ValueNetwork GlobalSteps LearningRate SharedActorCritic NetworkBody ObservationEncoder ModelUtils BCModule BaseRewardProvider CuriosityNetwork CuriosityRewardProvider ActionPredictionTuple ExtrinsicRewardProvider GAILRewardProvider DiscriminatorNetwork create_reward_provider RNDNetwork RNDRewardProvider RLTrainer Trainer TrainerFactory main check_coverage main main main run_training run_inference find_executables override_config_file init_venv get_unity_executable_path override_legacy_config_file get_base_path run_standalone_build checkout_csharp_version _override_config_dict undo_git_checkout get_base_output_path test_run_environment test_closing test_run_environment test_closing test_run_environment VerifyVersionCommand ActionTuple BehaviorMapping TerminalStep DecisionSteps DimensionProperty BehaviorSpec _ActionTupleBase ObservationSpec TerminalSteps BaseEnv ObservationType DecisionStep ActionSpec Communicator UnityEnvironment validate_environment_path launch_executable get_platform UnityPolicyException UnityCommunicatorStoppedException UnityObservationException UnityWorkerInUseException UnityException UnityCommunicationException UnityTimeOutException UnitySideChannelException UnityEnvironmentException UnityActionException _set_formatter_for_all_loggers get_logger set_log_level MockCommunicator RpcCommunicator UnityToExternalServicerImplementation OffsetBytesIO _generate_split_indices _process_rank_one_or_two_observation process_pixels behavior_spec_from_proto _check_observations_match_spec _process_maybe_compressed_observation _raise_on_nan_and_inf _process_images_num_channels steps_from_proto _observation_to_np_array _process_images_mapping _get_thread_timer TimerNode merge_gauges hierarchical_timer add_metadata get_timer_tree get_timer_root reset_timers get_timer_stack_for_thread set_gauge timed GaugeNode TimerStack UnityToExternalProtoServicer add_UnityToExternalProtoServicer_to_server UnityToExternalProtoStub BaseRegistryEntry ZipFileWithProgress get_tmp_dir get_local_binary_path_if_exists get_local_binary_path load_local_manifest load_remote_manifest download_and_extract_zip print_progress RemoteRegistryEntry UnityEnvRegistry DefaultTrainingAnalyticsSideChannel EngineConfigurationChannel EngineConfig EnvironmentParametersChannel FloatPropertiesChannel IncomingMessage OutgoingMessage RawBytesChannel SideChannel SideChannelManager StatsAggregationMethod StatsSideChannel test_initialization test_reset test_returncode_to_signal_name test_log_file_path_is_set test_close test_step test_port_defaults test_handles_bad_filename test_check_communication_compatibility test_set_logging_level test_validate_path mock_glob_method test_launch_executable test_validate_path_empty create_registry test_basic_in_registry delete_binaries test_rpc_communicator_close test_rpc_communicator_initialize_timeout test_rpc_communicator_checks_port_on_create test_rpc_communicator_initialize_OK test_rpc_communicator_create_multiple_workers test_rpc_communicator_initialize_callback test_batched_step_result_from_proto_raises_on_nan test_process_pixels test_process_visual_observation_bad_shape test_agent_behavior_spec_from_proto proto_from_steps_and_action test_batched_step_result_from_proto test_action_masking_continuous test_action_masking_discrete_1 test_mismatch_observations_raise_in_step_result_from_proto generate_list_agent_proto generate_uncompressed_proto_obs test_batched_step_result_from_proto_raises_on_infinite 
generate_compressed_proto_obs test_process_pixels_multi_png proto_from_steps test_vector_observation generate_compressed_proto_obs_with_mapping test_action_masking_discrete generate_compressed_data test_process_visual_observation test_action_masking_discrete_2 test_process_visual_observation_padded_channels test_process_pixels_gray test_process_visual_observation_grayscale test_set_action_single_agent test_set_action_multi_agent test_raw_bytes test_int_channel test_message_float_list IntChannel test_engine_configuration test_message_bool test_message_string test_float_properties test_environment_parameters test_message_int32 test_stats_channel test_message_float32 test_decision_steps test_specs test_terminal_steps test_empty_terminal_steps test_action_generator test_empty_decision_steps test_timers decorated_func ExampleStatsWriter get_example_stats_writer test_register_stats_writers remove_trailing_whitespace hash_file table_line ReleaseInfo validate_packages main NonTrivialPEP420PackageFinder main test_release_pattern get_python_package_version get_release_tag test_pip_pattern check_file main check_all_files git_ls_files update_pip_install_line set_academy_version_string _escape_non_none extract_version_string print_release_tag_commands check_versions set_package_version set_version set_extension_package_version MagicMock create_mock_vector_steps UnityToGymWrapper sample create_mock_group_spec setup_mock_unityenvironment step MagicMock create_mock_vector_steps UnityToGymWrapper create_mock_group_spec setup_mock_unityenvironment MagicMock create_mock_vector_steps UnityToGymWrapper create_mock_group_spec setup_mock_unityenvironment MagicMock create_mock_vector_steps UnityToGymWrapper sample reset create_mock_group_spec setup_mock_unityenvironment range append MagicMock create_mock_vector_steps UnityToGymWrapper sample create_mock_group_spec setup_mock_unityenvironment step MagicMock create_mock_vector_steps UnityToGymWrapper sample reset create_mock_group_spec setup_mock_unityenvironment step MagicMock create_mock_vector_steps UnityToGymWrapper sample reset create_mock_group_spec setup_mock_unityenvironment step create_observation_specs_with_shapes tuple create_discrete create_continuous range array range BehaviorMapping checkpoint_settings load debug warning plugin_func _get_num_available_cpus _read_in_integer_file get_distribution FloatTensor debug set_default_tensor_type device add_argument_group add_argument ArgumentParser resequence_and_append obs reset_agent continuous_actions vector_actions_deprecated steps_from_proto AgentBuffer append discrete_actions array enumerate make_demo_buffer observation_specs load_demonstration zip enumerate isdir isfile get_demo_files write SerializeToString _EncodeVarint len isdir join _validate_init_full_path init_path items parse_args start_learning join save_state join join train_model seed API_VERSION load_model num_areas print debug run_training dumps randint set_log_level __version__ warning add_timer_metadata as_dict DEBUG INFO get_version_string parse_command_line run_cli add_argument ArgumentParser from_dict experiment_config_path load_config fields_dict update items check_and_structure items structure register_structure_hook unstructure defaultdict dict_to_trainerdict register_structure_hook register_unstructure_hook DefaultTrainerDict structure RLock get_timer_root reset_timers put _send_response EngineConfig StepResponse env_factory behavior_specs _generate_all_results environment_initialized set_log_level apply get_and_reset_stats 
set_actions append StatsSideChannel set_configuration training_started EngineConfigurationChannel payload BEHAVIOR_SPECS env_action STEP EnvironmentParametersChannel items EnvironmentResponse isinstance reset RESET TrainingAnalyticsSideChannel step get update items copy MemorySettings structure to_settings items isinstance print items pop dict pop items get print set add append keys range len add_argument ArgumentParser get pop unstructure print convert_behaviors convert_samplers_and_curriculum convert_samplers output_config_path curriculum remove_nones print write_to_yaml_file convert sampler trainer_config_path parse_args size range reversed zeros_like size range reversed zeros_like append discount_rewards print EnvironmentParameterManager append ObservationSpec enumerate len arange ones is_discrete BehaviorSpec shape append array LogProbsTuple ActionTuple continuous_size ones is_discrete AgentExperience shape AgentStatus discrete_size append range pop make_fake_trajectory to_agentbuffer observation_specs create_observation_specs_with_shapes int tuple create_discrete BehaviorSpec create_continuous zeros Mock ActionInfo Mock _create_action_info agent_id publish_trajectory_queue range create_mock_steps steps AgentProcessor empty create_mock_policy add_experiences Mock _create_action_info agent_id publish_trajectory_queue range create_mock_steps steps AgentProcessor empty create_mock_policy add_experiences Mock assert_has_calls ActionInfo publish_trajectory_queue range call create_mock_steps append AgentProcessor empty create_mock_policy add_experiences Mock assert_has_calls ActionInfo end_episode publish_trajectory_queue range call create_mock_steps append AgentProcessor empty create_mock_policy add_experiences AgentManager create_mock_policy Mock get_nowait AgentManagerQueue put Mock assert_any_call remove record_environment_stats AgentManager add_writer StatsReporter write_stats flatten list range len append array range AgentBuffer resequence_and_append get_batch construct_fake_buffer assert_array make_mini_batch AgentBuffer reset_agent array values padded_to_batch append AgentBufferField range enumerate resequence_and_append sample_mini_batch construct_fake_buffer AgentBuffer resequence_and_append construct_fake_buffer AgentBuffer resequence_and_append construct_fake_buffer AgentBuffer truncate values list BytesIO construct_fake_buffer save_to_file AgentBuffer keys load_from_file safe_load convert_behaviors safe_load convert enumerate remove_nones load_demonstration demo_to_buffer dirname abspath load_demonstration demo_to_buffer dirname abspath dirname abspath dirname abspath mock_open BytesIO DemonstrationMetaProto write_delimited from_dict safe_load curriculum from_dict safe_load curriculum from_dict safe_load curriculum EnvironmentParameterManager environment_parameters from_dict safe_load EnvironmentParameterManager environment_parameters clear safe_load MagicMock append parse_command_line clear parse_command_line parse_command_line TrainerSettings FakeTrainer set_is_policy_updating end_episode create_rl_trainer values items construct_fake_buffer _clear_update_buffer create_rl_trainer Mock create_rl_trainer set_is_policy_updating subscribe_trajectory_queue advance put make_fake_trajectory publish_policy_queue AgentManagerQueue add_policy get_nowait range Mock assert_has_calls create_rl_trainer subscribe_trajectory_queue summary_freq checkpoint_interval put make_fake_trajectory publish_policy_queue advance AgentManagerQueue add_policy get_nowait range Mock _append_to_update_buffer 
create_rl_trainer set_is_policy_updating to_agentbuffer subscribe_trajectory_queue make_fake_trajectory publish_policy_queue _process_trajectory AgentManagerQueue add_policy range items items isinstance zip RunOptions check_if_different TrainerSettings RunOptions set_config_specified deep_update_dict structure structure structure structure RunOptions from_dict check_dict_is_at_least prioritize_resume_init safe_load as_dict EnvironmentSettings from_dict check_if_different structure from_dict set_config_specified RunOptions loads dumps clear assert_called_once_with Mock assert_has_calls get_stats_summaries add_stat add_writer StatsReporter float range write_stats clear Mock add_property add_writer StatsReporter assert_called_once_with record_environment_stats get_stats_summaries AgentManager StatsReporter range write_stats sleep TensorboardWriter StatsSummary write_stats RunOptions check_environment_trains close simple_env_factory SubprocessEnvManager RunOptions close SubprocessEnvManager RunOptions close SubprocessEnvManager assert_called_once_with TorchSettings set_torch_config behaviors subscribe_trajectory_queue setup_test_behavior_specs put make_fake_trajectory add_policy from_name_behavior_id AgentManagerQueue generate TrainerFactory brain_name create_policy behaviors subscribe_trajectory_queue setup_test_behavior_specs put make_fake_trajectory add_policy from_name_behavior_id AgentManagerQueue generate TrainerFactory brain_name create_policy behaviors subscribe_trajectory_queue setup_test_behavior_specs put make_fake_trajectory add_policy from_name_behavior_id AgentManagerQueue generate TrainerFactory brain_name create_policy GhostController MagicMock GhostController TrainerController MagicMock assert_called_with MagicMock start_learning assert_called_once MagicMock assert_not_called start_learning assert_called_once MagicMock MagicMock assert_called_once MagicMock advance add assert_not_called behaviors ppo_dummy_config behaviors TrainerFactory set_config_specified generate _load_config StringIO join validate_existing_directories mkdir behaviors from_dict join format setup_init_path write safe_load mkdir from_dict safe_load _sanitize_run_options _hash join set_parameter_state LESSON_NUM load_state NOTAREALKEY get_parameter_state save_state join time add_checkpoint set_parameter_state CHECKPOINTS track_final_checkpoint ModelCheckpoint items to_agentbuffer add set make_fake_trajectory from_buffer AgentBuffer append range enumerate extract_stack filename get __old_np_array _check_no_float64 get _check_no_float64 __old_np_zeros get __old_np_ones _check_no_float64 ones tuple ActionSpec ActionModel ones discrete create_action_model _get_dists create_action_model ones discrete_list _sample_action _get_dists ones _sample_action create_action_model _get_dists create_action_model all_discrete_list ones _get_probs_and_entropy tolist discrete_list GaussianDistInstance AgentAction zip zeros tensor DistInstances ones get_action_out create_action_model append group_from_buffer range AgentBuffer slice tensor AgentAction tensor AgentAction to_flat MultiHeadAttention forward ones MultiHeadAttention ones zeros forward range get_zero_entities_mask generate_input_helper linear_layer rand zero_grad forward EntityEmbedding seed list Adam range entity_embeddings l_layer mean ResidualSelfAttention manual_seed backward reshape add_self_embedding parameters step linear_layer rand zero_grad forward EntityEmbedding seed list Adam range get_zero_entities_mask entity_embeddings l_layer mean ResidualSelfAttention 
manual_seed item backward print reshape add_self_embedding parameters step entity_embedding rand zero_grad EntityEmbedding seed list Adam expand transformer append CrossEntropyLoss cat range get_zero_entities_mask l_layer ResidualSelfAttention manual_seed item backward print parameters LinearEncoder randint step loss TrainerSettings BCModule TorchPolicy items BehavioralCloningSettings create_bc_module create_mock_3dball_behavior_specs update create_mock_3dball_behavior_specs BehavioralCloningSettings create_bc_module assert_stats_are_float update create_mock_3dball_behavior_specs BehavioralCloningSettings current_lr create_bc_module assert_stats_are_float update MagicMock create_mock_3dball_behavior_specs BehavioralCloningSettings current_lr create_bc_module update create_mock_3dball_behavior_specs BehavioralCloningSettings create_bc_module assert_stats_are_float update create_mock_banana_behavior_specs BehavioralCloningSettings create_bc_module assert_stats_are_float update create_mock_banana_behavior_specs BehavioralCloningSettings create_bc_module assert_stats_are_float ones forward ConditionalEncoder linear_layer rand zero_grad seed list Adam sum conditional_enc range detach ConditionalEncoder l_layer mean item manual_seed float backward print parameters step ones value_heads ValueHeads gauss_dist backward ones tolist zero_grad Adam range parameters shape manual_seed zeros step mse_loss log_prob GaussianDistribution gauss_dist zip backward ones all_log_prob create_test_prob zero_grad Adam tolist parameters manual_seed MultiCategoricalDistribution tensor step range enumerate ones tolist GaussianDistInstance manual_seed sample zeros ones manual_seed sample TanhGaussianDistInstance range zeros CategoricalDistInstance log_prob manual_seed sample tensor range items zip Normalizer update copy_from tolist tensor vector_encoder Mock ones update_normalization VectorInput assert_called_with copy_normalization ones vis_class enc vis_class backward zero_grad Adam mean parameters manual_seed step range cat setup_test_behavior_specs load_weights zip assert_array_equal get_weights PPOTrainer create_policy get_policy GhostController GhostTrainer save_model zip PPOTrainer as_posix setup_test_behavior_specs behavior_id from_name_behavior_id add_policy assert_array_equal get_weights brain_name create_policy GhostController GhostTrainer PPOTrainer subscribe_trajectory_queue setup_test_behavior_specs put advance make_fake_trajectory from_name_behavior_id AgentManagerQueue add_policy brain_name create_policy copy_buffer_fields GhostController GhostTrainer PPOTrainer simulate_rollout get_nowait setup_test_behavior_specs _swap_snapshots advance publish_policy_queue from_name_behavior_id AgentManagerQueue add_policy brain_name create_policy network_settings evolve check_environment_trains SimpleEnvironment hyperparameters hyperparameters evolve SimpleEnvironment check_environment_trains network_settings evolve check_environment_trains MemoryEnvironment hyperparameters hyperparameters evolve SimpleEnvironment check_environment_trains hyperparameters evolve SimpleEnvironment check_environment_trains network_settings evolve check_environment_trains MemoryEnvironment hyperparameters Swish Tensor mul sigmoid linear_layer manual_seed lstm_layer named_parameters manual_seed ones LSTM lstm manual_seed rand LayerNorm manual_seed create_observation_specs_with_shapes backward ones tolist zero_grad Adam parameters shape manual_seed networkbody step mse_loss range NetworkBody NetworkSettings 
create_observation_specs_with_shapes backward ones tolist zero_grad Adam parameters shape manual_seed networkbody step mse_loss range NetworkBody NetworkSettings create_observation_specs_with_shapes backward ones tolist zero_grad Adam parameters shape manual_seed networkbody step mse_loss NetworkBody NetworkSettings create_observation_specs_with_shapes MultiAgentNetworkBody backward ones tuple tolist zero_grad Adam mse_loss parameters shape manual_seed networkbody step ActionSpec range NetworkSettings create_observation_specs_with_shapes MultiAgentNetworkBody backward ones tuple tolist zero_grad Adam mse_loss parameters shape manual_seed networkbody step ActionSpec range NetworkSettings create_observation_specs_with_shapes MultiAgentNetworkBody backward ones tuple tolist zero_grad Adam mse_loss parameters shape manual_seed networkbody step ActionSpec range NetworkSettings create_observation_specs_with_shapes backward ones tolist zero_grad Adam parameters ValueNetwork manual_seed value_net step range values NetworkSettings create_observation_specs_with_shapes ones tuple discrete_list get_action_and_stats critic_pass SimpleActor ValueNetwork tensor SharedActorCritic ActionSpec NetworkSettings create_observation_specs_with_shapes MultiAgentNetworkBody tuple manual_seed networkbody ActionSpec NetworkSettings evolve TorchPOCAOptimizer TorchPolicy setup_test_behavior_specs update ENVIRONMENT_REWARDS simulate_rollout MEMORY behavior_spec copy_buffer_fields create_test_poca_optimizer items to_agentbuffer next_group_obs make_fake_trajectory get_trajectory_and_baseline_value_estimates next_obs create_test_poca_optimizer update create_test_poca_optimizer simulate_rollout behavior_spec copy_buffer_fields MEMORY update ones_like simulate_rollout behavior_spec poca_dummy_config copy_buffer_fields create_test_poca_optimizer TrainerSettings create_observation_specs_with_shapes create_policy values end_episode create_discrete subscribe_trajectory_queue BehaviorSpec put advance make_fake_trajectory publish_policy_queue from_name_behavior_id AgentManagerQueue add_policy POCATrainer TorchPolicy setup_test_behavior_specs TrainerSettings list evaluate agent_id create_steps_from_behavior_spec behavior_spec create_policy_mock TrainerSettings observation_specs simulate_rollout continuous_size from_buffer list_to_tensor behavior_spec create_policy_mock unsqueeze discrete_size evaluate_actions len TrainerSettings sample_actions observation_specs simulate_rollout from_buffer list_to_tensor behavior_spec create_policy_mock unsqueeze len TrainerSettings create_policy_mock set_step increment_step evolve TorchPolicy setup_test_behavior_specs TorchPPOOptimizer update ENVIRONMENT_REWARDS simulate_rollout behavior_spec create_test_ppo_optimizer copy_buffer_fields MEMORY update simulate_rollout behavior_spec create_test_ppo_optimizer copy_buffer_fields MEMORY update ones_like simulate_rollout ppo_dummy_config behavior_spec create_test_ppo_optimizer copy_buffer_fields items get_trajectory_value_estimates to_agentbuffer make_fake_trajectory create_test_ppo_optimizer next_obs values TorchSACOptimizer TorchPolicy setup_test_behavior_specs update simulate_rollout create_sac_optimizer_mock behavior_spec manual_seed create_sac_optimizer_mock behavior_spec simulate_rollout update_reward_signals check_environment_trains evolve MultiAgentEnvironment hyperparameters evolve MultiAgentEnvironment check_environment_trains network_settings evolve check_environment_trains hyperparameters MultiAgentEnvironment network_settings evolve 
check_environment_trains MemoryEnvironment hyperparameters MultiAgentEnvironment check_environment_trains evolve SimpleEnvironment hyperparameters evolve SimpleEnvironment check_environment_trains hyperparameters evolve SimpleEnvironment check_environment_trains network_settings evolve check_environment_trains SimpleEnvironment hyperparameters network_settings evolve check_environment_trains SimpleEnvironment hyperparameters network_settings evolve check_environment_trains MemoryEnvironment hyperparameters check_environment_trains evolve SimpleEnvironment hyperparameters evolve SimpleEnvironment check_environment_trains hyperparameters evolve SimpleEnvironment check_environment_trains hyperparameters evolve SimpleEnvironment check_environment_trains network_settings evolve check_environment_trains SimpleEnvironment hyperparameters network_settings evolve check_environment_trains MemoryEnvironment hyperparameters check_environment_trains evolve SimpleEnvironment SelfPlaySettings check_environment_trains evolve SimpleEnvironment SelfPlaySettings check_environment_trains evolve SimpleEnvironment SelfPlaySettings check_environment_trains evolve SimpleEnvironment SelfPlaySettings evolve check_environment_trains SimpleEnvironment BehavioralCloningSettings simple_record evolve check_environment_trains SimpleEnvironment BehavioralCloningSettings hyperparameters simple_record evolve check_environment_trains SimpleEnvironment BehavioralCloningSettings hyperparameters simple_record enc_func ones get_encoder_for_type _check_resolution_for_encoder forward create_observation_specs_with_shapes create_input_processors append sum range enumerate get_value CONSTANT LINEAR DecayedValue zip polynomial_decay zip list_to_tensor asarray tensor break_into_branches enumerate tensor actions_to_onehot zip masked_mean tensor bool T TestModule soft_update TrainerSettings Mock TorchModelSaver create_policy_mock register TrainerSettings join TorchModelSaver initialize_or_load create_policy_mock set_step save_checkpoint register _compare_two_policies TrainerSettings join TorchModelSaver initialize_or_load parameters create_policy_mock set_step save_checkpoint zip register NetworkSettings TrainerSettings join TorchModelSaver initialize_or_load create_policy_mock set_step HyperparametersClass save_checkpoint OptimizerClass _compare_two_optimizers register assert_array_equal obs default_device all_discrete_tensor create_steps_from_behavior_spec behavior_spec unsqueeze _extract_masks to_numpy to assert_array_equal make_fake_trajectory values zip TrainerSettings join TorchModelSaver create_policy_mock save_checkpoint register TrainerSettings join items hasattr create_agent_buffer TorchModelSaver initialize_or_load parameters create_policy_mock set_step HyperparametersClass save_checkpoint get_modules zip OptimizerClass behavior_spec register keys TrainerSettings join TorchModelSaver initialize_or_load create_policy_mock set_step HyperparametersClass save_checkpoint OptimizerClass register CuriosityRewardProvider CuriositySettings CURIOSITY create_reward_provider CuriositySettings seed update create_agent_buffer CuriosityRewardProvider manual_seed range CuriositySettings seed update create_agent_buffer CuriosityRewardProvider item manual_seed tensor range CuriositySettings seed update create_agent_buffer CuriosityRewardProvider mean manual_seed to_numpy float range CuriositySettings ExtrinsicRewardProvider RewardSignalSettings EXTRINSIC RewardSignalSettings evaluate create_agent_buffer set RewardSignalSettings 
ExtrinsicRewardProvider GAILRewardProvider GAILSettings GAIL GAILSettings create_reward_provider GAILSettings GAIL seed update create_reward_provider GAILSettings create_agent_buffer manual_seed GAIL range RNDSettings RNDRewardProvider RNDSettings RND RNDRewardProvider RNDSettings items continuous ones AgentBuffer discrete random_action append zeros range enumerate floor data Linear LSTM add_ range named_parameters local Lock frozenset get rcls print join exit walk float check_coverage init_venv add_argument ArgumentParser split get_base_path run_standalone_build exit int time join init_venv override_config_file print makedirs run_inference copy get_base_path rename dirname run_standalone_build abspath run checkout_csharp_version exists get_base_output_path find_executables join time get print run python run_training csharp exists get join move get_unity_executable_path print dirname run get_base_output_path makedirs join X_OK frozenset splitext append access walk print check_call call check_call check_call _override_config_dict values items isinstance update values check_call get_steps UnityEnvironment print step reset abs str UnityToGymWrapper sample range reset UnityEnvironment close UnityToGymWrapper format EngineConfigurationChannel observation_specs set_configuration_parameters set_actions any random_action len join basename replace glob debug getcwd set normpath isfile validate_environment_path debug setFormatter getLogger addHandler StreamHandler add Formatter setLevel Formatter setLevel _set_formatter_for_all_loggers setFormatter vector_action_size_deprecated action_spec observations num_continuous_actions tuple ObservationSpec append ActionSpec OffsetBytesIO original_tell index append array transpose mean stack zip append enumerate mean reshape concatenate tuple shape data list compressed_data reshape process_pixels shape array compressed_channel_mapping shape cast mean isnan reshape _raise_on_nan_and_inf shape observation_specs _generate_split_indices _process_rank_one_or_two_observation ones _process_maybe_compressed_observation len _raise_on_nan_and_inf astype any split discrete_branches append bool sum array enumerate range len get_ident TimerStack perf_counter push items merge reset method_handlers_generic_handler add_generic_rpc_handlers download_and_extract_zip get_local_binary_path_if_exists debug range join get_tmp_dir glob rmtree hexdigest join chmod makedirs uuid4 join int str remove get_tmp_dir exists chmod print glob rmtree urlopen print_progress hexdigest print int min max uuid4 join str get_tmp_dir load_local_manifest urlopen UUID UnityEnvironment close MockCommunicator UnityEnvironment close MockCommunicator _executable_args close index MockCommunicator UnityEnvironment get_steps obs observation_specs close reset MockCommunicator zip UnityEnvironment len get_steps empty_action obs ActionTuple continuous zip observation_specs step close MockCommunicator discrete set_actions UnityEnvironment len UnityEnvironment close MockCommunicator validate_environment_path validate_environment_path launch_executable PermissionError set_log_level rmtree get_tmp_dir RemoteRegistryEntry register UnityEnvRegistry create_registry make close reset step range delete_binaries close RpcCommunicator close RpcCommunicator close RpcCommunicator initialize RpcCommunicator UnityInputProto assert_called assert_called RpcCommunicator UnityInputProto assert_called RpcCommunicator UnityInputProto list extend ObservationProto AgentInfoProto append prod range len fromarray list uint8 BytesIO bytes 
concatenate astype shape save zeros range ObservationProto generate_compressed_data extend shape ObservationProto generate_compressed_data extend shape ObservationProto shape tolist extend generate_uncompressed_proto_obs obs concatenate tolist agent_id ObservationProto AgentInfoProto append action_mask proto_from_steps extend AgentActionProto append range generate_compressed_data process_pixels rand generate_compressed_data process_pixels rand generate_compressed_data process_pixels rand create_observation_specs_with_shapes generate_list_agent_proto _process_rank_one_or_two_observation enumerate generate_compressed_proto_obs generate_compressed_proto_obs_with_mapping rand _process_maybe_compressed_observation extend AgentInfoProto generate_compressed_proto_obs generate_compressed_proto_obs_with_mapping rand _process_maybe_compressed_observation extend mean AgentInfoProto generate_compressed_proto_obs_with_mapping rand _process_maybe_compressed_observation extend take AgentInfoProto generate_compressed_proto_obs rand AgentInfoProto extend create_observation_specs_with_shapes list sort agent_id BehaviorSpec steps_from_proto create_continuous generate_list_agent_proto range create_observation_specs_with_shapes create_continuous BehaviorSpec append generate_list_agent_proto create_observation_specs_with_shapes create_discrete BehaviorSpec steps_from_proto generate_list_agent_proto action_mask create_observation_specs_with_shapes create_discrete BehaviorSpec steps_from_proto generate_list_agent_proto action_mask create_observation_specs_with_shapes create_discrete BehaviorSpec steps_from_proto generate_list_agent_proto action_mask create_observation_specs_with_shapes BehaviorSpec steps_from_proto create_continuous generate_list_agent_proto action_mask BrainParametersProto behavior_spec_from_proto extend create_observation_specs_with_shapes generate_list_agent_proto create_continuous BehaviorSpec create_observation_specs_with_shapes generate_list_agent_proto create_continuous BehaviorSpec get_steps make EngineConfigurationChannel ActionTuple set_action_for_agent ones set_configuration_parameters agent_id close reset add_continuous step range get_steps make EngineConfigurationChannel ActionTuple ones set_configuration_parameters close reset set_actions add_continuous step range generate_side_channel_messages process_side_channel_message send_int IntChannel FloatPropertiesChannel process_side_channel_message generate_side_channel_messages get_property set_property uuid4 process_side_channel_message generate_side_channel_messages RawBytesChannel send_raw_data get_and_clear_received_messages len buffer read_bool append write_bool IncomingMessage range OutgoingMessage buffer write_int32 read_int32 IncomingMessage OutgoingMessage IncomingMessage write_float32 buffer read_float32 OutgoingMessage read_string write_string buffer IncomingMessage OutgoingMessage IncomingMessage buffer OutgoingMessage read_float32_list write_float32_list set_configuration channel_id EngineConfigurationChannel generate_side_channel_messages process_side_channel_message set_configuration_parameters RawBytesChannel read_float32 read_int32 IncomingMessage get_and_clear_received_messages default_config channel_id generate_side_channel_messages process_side_channel_message read_string set_float_parameter RawBytesChannel read_float32 read_int32 IncomingMessage EnvironmentParametersChannel IncomingMessage write_float32 write_string buffer write_int32 get_and_reset_stats on_message_received StatsSideChannel OutgoingMessage 
DecisionSteps action_mask empty BehaviorSpec TerminalSteps empty BehaviorSpec create_continuous ActionSpec create_discrete continuous print create_discrete discrete random_action create_continuous enumerate set_gauge TimerStack print RunOptions register_stats_writer_plugins md5 exists pop join append append join is_main print find_packages find validate_packages join remove replace frozenset endswith set add walk print print sub search group git_ls_files get_release_tag get_python_package_version check_all_files compile join print extract_version_string set values join format set_academy_version_string print set_package_version set_extension_package_version enumerate split print | Unity-Technologies/ml-agents | 1,091 |
UnofficialJuliaMirror/GaussianProcesses.jl-891a1506-143c-57d2-908e-e1f8e92e6de9 | ['gaussian processes'] | ['GaussianProcesses.jl: A Nonparametric Bayes package for the Julia Language'] | perf/benchmarks/benchmark_GPy.py perf/benchmarks/benchmark GPflow.py | # GaussianProcesses.jl [](https://travis-ci.org/STOR-i/GaussianProcesses.jl) [](https://ci.appveyor.com/project/STOR-i/gaussianprocesses-jl) [](https://coveralls.io/github/STOR-i/GaussianProcesses.jl?branch=master) [](https://codecov.io/gh/STOR-i/GaussianProcesses.jl) [](http://STOR-i.github.io/GaussianProcesses.jl/latest) A Gaussian Processes package for Julia. This package is still under development. If you have any suggestions to improve the package, or if you've noticed a bug, then please post an [issue](https://github.com/STOR-i/GaussianProcesses.jl/issues/new) for us and we'll get to it as quickly as we can. Pull requests are also welcome. ## Citing GaussianProcesses.jl To cite GaussianProcesses.jl, please reference the [arXiv paper](https://arxiv.org/abs/1812.09064). Sample Bibtex is given below: | 1,092 |
UnofficialJuliaMirrorSnapshots/GaussianProcesses.jl-891a1506-143c-57d2-908e-e1f8e92e6de9 | ['gaussian processes'] | ['GaussianProcesses.jl: A Nonparametric Bayes package for the Julia Language'] | perf/benchmarks/benchmark_GPy.py perf/benchmarks/benchmark GPflow.py | # GaussianProcesses.jl [](https://travis-ci.org/STOR-i/GaussianProcesses.jl) [](https://ci.appveyor.com/project/STOR-i/gaussianprocesses-jl) [](https://coveralls.io/github/STOR-i/GaussianProcesses.jl?branch=master) [](https://codecov.io/gh/STOR-i/GaussianProcesses.jl) [](http://STOR-i.github.io/GaussianProcesses.jl/latest) A Gaussian Processes package for Julia. This package is still under development. If you have any suggestions to improve the package, or if you've noticed a bug, then please post an [issue](https://github.com/STOR-i/GaussianProcesses.jl/issues/new) for us and we'll get to it as quickly as we can. Pull requests are also welcome. ## Citing GaussianProcesses.jl To cite GaussianProcesses.jl, please reference the [arXiv paper](https://arxiv.org/abs/1812.09064). Sample Bibtex is given below: | 1,093 |
UsmannK/TABOR | ['anomaly detection'] | ['TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems'] | tabor/test_snooper.py tabor/gtsrb_dataset.py tabor/__init__.py tabor/snooper.py tabor/eval_badnet.py tabor/train_badnet.py test_poison evaluate_model gtsrb_signname GTSRBDataset apply_poison gen_poison Snooper train build_model subplots gtsrb_signname argmax show squeeze GTSRBDataset set_xlabel imshow append expand_dims range format apply_poison copy choice load_weights gen_poison print subplots_adjust test_images len subplots gtsrb_signname argmax test_labels show set_xlabel GTSRBDataset imshow append expand_dims range predict format classification_report choice load_weights compile evaluate print subplots_adjust tqdm test_images len fill empty array resize Sequential add Dense MaxPooling2D summary Conv2D Flatten train_labels test_labels show GTSRBDataset ylabel ylim legend evaluate_model format plot build_model compile evaluate xlabel print train_images ModelCheckpoint test_images fit | # TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems ## About This repository contains partial code implementation of the paper (https://arxiv.org/pdf/1908.01763.pdf). Currently this repo has been written to work on the GTSRB dataset with the 6 Conv + 2 MaxPooling CNN from the original paper. ## Dependencies This codebase is written in tensorflow and tf.keras and has been tested on tensorflow 1.14 and python 3.6.8 ## Getting Started 1. Clone the TABOR repository ```shell git clone https://github.com/UsmannK/TABOR.git ``` | 1,094 |
VCL3D/DeepPanoramaLighting | ['mixed reality'] | ['Deep Lighting Environment Map Estimation from Spherical Panoramas'] | loaders/Illum_loader.py inference.py loaders/autoenc_ldr2hdr.py helpers/sh_functions.py main parse_arguments evaluate getCoeeficientsMatrix shReconstructSignal P sh_lmax_from_terms xy2ll deringing shTerms K shEvaluate SH shIndex factorial LDR2HDR weights_init Inference_Data IlluminationModule conv_bn_elu add_argument ArgumentParser basename out_path input_path Inference_Data DataLoader mkdir splitext float enumerate load evaluate print chkpnt_path load_state_dict device LDR2HDR to zeros_like sqrt zeros range shape sqrt isscalar isscalar zeros SH range shIndex int arange reshape shEvaluate xy2ll sh_lmax_from_terms float getCoeeficientsMatrix normal_ __name__ fill_ int | # **Code accompanying the paper "Deep Lighting Environment Map Estimation from Spherical Panoramas", CVPRW 2020** [](https://arxiv.org/abs/2005.08000) [](http://cvpr2020.thecvf.com/) [](https://sites.google.com/view/omnicv-cvpr2020/home) [](https://vcl3d.github.io/DeepPanoramaLighting/) [](https://www.youtube.com/watch?v=ZLn55NbtBZ8) ## TODO - [x] Pre-trained model. - [x] Inference code. - [ ] Training code. | 1,095 |
VDIGPKU/DADA | ['data augmentation'] | ['DADA: Differentiable Automatic Data Augmentation'] | search_relax/networks/shakeshake/shake_resnet.py fast-autoaugment/FastAutoAugment/networks/shakeshake/shakeshake.py fast-autoaugment/FastAutoAugment/networks/__init__.py search_gumbel/operation.py fast-autoaugment/FastAutoAugment/imagenet.py search_gumbel/networks/shakedrop.py fast-autoaugment/FastAutoAugment/train.py search_relax/architect.py fast-autoaugment/FastAutoAugment/genotype.py search_relax/train.py fast-autoaugment/FastAutoAugment/common.py fast-autoaugment/FastAutoAugment/networks/shakeshake/shake_resnet.py fast-autoaugment/FastAutoAugment/networks/wideresnet.py search_relax/random_primitives.py search_relax/train_search.py fast-autoaugment/FastAutoAugment/networks/resnet.py search_gumbel/networks/shakeshake/shake_resnext.py search_relax/dataset.py search_relax/primitives.py search_relax/networks/wideresnet.py search_gumbel/networks/wideresnet.py search_relax/networks/__init__.py fast-autoaugment/FastAutoAugment/search.py search_relax/imagenet.py search_gumbel/networks/pyramidnet.py search_gumbel/architect.py search_gumbel/networks/shakeshake/shake_resnet.py search_relax/model_search.py fast-autoaugment/FastAutoAugment/archive.py search_relax/networks/shakedrop.py search_gumbel/train_search_paper.py search_relax/networks/resnet.py fast-autoaugment/FastAutoAugment/lr_scheduler.py search_relax/networks/shakeshake/shake_resnext.py fast-autoaugment/archive.py search_relax/networks/shakeshake/shakeshake.py search_relax/train_search_paper.py search_gumbel/networks/resnet.py search_relax/utils.py fast-autoaugment/FastAutoAugment/metrics.py search_gumbel/train.py search_relax/operation.py fast-autoaugment/FastAutoAugment/networks/shakedrop.py search_gumbel/primitives.py fast-autoaugment/FastAutoAugment/data.py search_gumbel/train_search.py search_gumbel/imagenet.py search_relax/networks/pyramidnet.py fast-autoaugment/FastAutoAugment/networks/shakeshake/shake_resnext.py search_gumbel/model_search.py fast-autoaugment/FastAutoAugment/augmentations.py fast-autoaugment/FastAutoAugment/networks/pyramidnet.py search_gumbel/dataset.py search_gumbel/networks/shakeshake/shakeshake.py search_gumbel/networks/__init__.py search_gumbel/utils.py autoaug_paper_cifar10 fa_reduced_svhn autoaug2arsaug float_parameter no_duplicates policy_decoder arsaug_policy autoaug_policy int_parameter fa_resnet50_rimagenet remove_deplicates fa_reduced_cifar10 autoaug_paper_cifar10 fa_reduced_svhn autoaug2arsaug float_parameter no_duplicates policy_decoder arsaug_policy autoaug_policy int_parameter fa_resnet50_rimagenet remove_deplicates fa_reduced_cifar10 Rotate Solarize Contrast SamplePairing TranslateY augment_list Brightness ShearX TranslateYAbs Cutout Lighting Invert get_augment Posterize2 AutoContrast TranslateXAbs Posterize Color TranslateX Flip Equalize ShearY CutoutAbs apply_augment Sharpness get_logger add_filehandler CutoutDefault get_dataloaders Augmentation SubsetSampler download_and_extract_tar ImageNet parse_meta extract_tar _splitexts parse_devkit prepare_val_folder prepare_train_folder parse_val_groundtruth adjust_learning_rate_resnet Accumulator cross_entropy_smooth SummaryWriterDummy accuracy _get_path step_w_log train_model eval_tta reproducibility run_epoch train_and_eval conv3x3 BasicBlock PyramidNet Bottleneck ResNet conv3x3 BasicBlock Bottleneck ShakeDropFunction ShakeDrop conv_init conv3x3 WideBasic WideResNet get_model num_class Shortcut ShakeShake ShakeBlock ShakeResNet ShakeBottleNeck ShakeResNeXt 
_concat Architect num_class AugmentDataset CutoutDefault get_dataloaders SubsetSampler download_and_extract_tar ImageNet parse_meta extract_tar _splitexts parse_devkit prepare_val_folder prepare_train_folder parse_val_groundtruth DifferentiableAugment MixedAugment Network Rotate Solarize Contrast SamplePairing TranslateY augment_list Brightness ShearX TranslateYAbs Cutout Lighting Invert get_augment Posterize2 AutoContrast TranslateXAbs Posterize Color TranslateX Flip Equalize ShearY CutoutAbs apply_augment Sharpness dfs print_genotype infer reproducibility main train print_genotype infer reproducibility main train load Cutout accuracy drop_path create_exp_dir save_checkpoint _data_transforms_cifar10 count_parameters_in_MB save AvgrageMeter conv3x3 BasicBlock PyramidNet Bottleneck ResNet conv3x3 BasicBlock Bottleneck ShakeDropFunction ShakeDrop conv_init conv3x3 WideBasic WideResNet get_model num_class Shortcut ShakeShake ShakeBlock ShakeResNet ShakeBottleNeck ShakeResNeXt _concat Architect num_class AugmentDataset CutoutDefault get_dataloaders SubsetSampler download_and_extract_tar ImageNet parse_meta extract_tar _splitexts parse_devkit prepare_val_folder prepare_train_folder parse_val_groundtruth QFunc DifferentiableAugment MixedAugment Network Rotate Solarize Contrast SamplePairing TranslateY augment_list Brightness ShearX TranslateYAbs Cutout Lighting Invert get_augment Posterize2 AutoContrast TranslateXAbs Posterize Color TranslateX Flip Equalize ShearY CutoutAbs apply_augment Sharpness dfs dfs print_genotype infer reproducibility main train print_genotype infer reproducibility main train load Cutout accuracy drop_path create_exp_dir save_checkpoint _data_transforms_cifar10 count_parameters_in_MB save AvgrageMeter conv3x3 BasicBlock PyramidNet Bottleneck ResNet conv3x3 BasicBlock Bottleneck ShakeDropFunction ShakeDrop conv_init conv3x3 WideBasic WideResNet get_model num_class Shortcut ShakeShake ShakeBlock ShakeResNet ShakeBottleNeck ShakeResNeXt append join add set append augment_list range int int int size min copy uniform rectangle max get_augment clear setFormatter getLogger addHandler StreamHandler setLevel setFormatter addHandler DEBUG setLevel FileHandler autoaug_paper_cifar10 arsaug_policy autoaug_policy DataLoader SVHN fa_reduced_cifar10 list fa_reduced_svhn ConcatDataset DistributedSampler Augmentation len SubsetRandomSampler samples append next CIFAR100 range StratifiedShuffleSplit insert debug CutoutDefault Compose targets eval info CIFAR10 fa_resnet50_rimagenet SubsetSampler isinstance set_preaug ImageNet print Subset split endswith remove dirname join basename download_url extract_tar expanduser parse_meta parse_val_groundtruth join extract_tar join sorted basename move set mkdir zip append splitext topk size t eq mul_ expand_as append sum max cuda LogSoftmax list _iteration print get_original_attribute OrderedDict filter max TrialRunner _trials len get train_and_eval num_class model cuda topk Accumulator squeeze device_count load_state_dict iter append next range CrossEntropyLoss get concatenate policy_decoder eval add_dict load time reporter t get_dataloaders loss_fn get_model numpy model clip_grad_norm_ zero_grad set_description list Accumulator getattr set_postfix get synchronize add_dict info float items backward add_scalar accuracy tqdm parameters loss_fn bool step len num_class SGD DistributedOptimizer warning save device set_device OrderedDict load_state_dict broadcast_parameters run_epoch CrossEntropyLoss range state_dict get product replace 
GradualWarmupScheduler debug set eval init info CosineAnnealingLR train load reporter set_epoch dict parameters get_dataloaders adjust_learning_rate_resnet isnan get_model local_rank broadcast_optimizer_state add_scalar seed manual_seed_all manual_seed bias xavier_uniform_ weight __name__ constant_ ShakeResNet ResNet WideResNet DataParallel PyramidNet cuda device to local_rank ShakeResNeXt AugmentDataset labels enumerate set_detect_anomaly enumerate info cutout batch_size SGD save reproducibility dataset cuda dataroot seed str print_genotype set_device step exit use_parallel Network CrossEntropyLoss Architect range cutout_length softmax info CosineAnnealingLR float join time learning_rate use_cuda genotype ops_weights clamp infer num_workers magnitudes parameters get_dataloaders model_name count_parameters_in_MB train epochs gpu sample_ops_weights_index model AvgrageMeter clip_grad_norm_ zero_grad sample_probabilities_index cuda iter grad_clip next detach update size avg item sample info enumerate criterion backward accuracy parameters set_augmenting step eval AvgrageMeter set_augmenting Cutout cutout Compose cutout_length append copyfile join save state_dict load_state_dict Variable div_ mul_ bernoulli_ join format basename print copyfile mkdir probabilities_b ops_weights_b | # `DADA: Differentiable Automatic Data Augmentation` Contact us with [email protected], [email protected]. ## Introduction The official code for our ECCV 2020 paper `DADA: Differentiable Automatic Data Augmentation`, which is at least one order of magnitude faster than the state-of-the-art data augmentation (DA) policy search algorithms while achieving very comparable accuracy. The implementation of our training part is based on [fast-autoaugment](https://github.com/kakaobrain/fast-autoaugment). ## License **The project is only free for academic research purposes, but needs authorization for commerce. For commerce permission, please contact [email protected].** ## Citation If you use our code/model, please consider to cite our ECCV 2020 paper **DADA: Differentiable Automatic Data Augmentation** [[arXiv](https://arxiv.org/pdf/2003.03780.pdf)] [[ECCV](https://link.springer.com/chapter/10.1007/978-3-030-58542-6_35)]. | 1,096 |
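For context on the DADA row above, "differentiable" policy search means the discrete choice of augmentation operation is relaxed so gradients reach the policy parameters (the dependency list hints at Gumbel and RELAX-style estimators such as `QFunc`, `ops_weights`, `magnitudes`). The snippet below is a deliberately minimal Gumbel-softmax sketch of that general idea with made-up placeholder operations; it is not DADA's actual search procedure.

```python
import torch
import torch.nn.functional as F

# Hypothetical differentiable augmentation ops (placeholders, not DADA's op set).
def identity(x, m):   return x
def brightness(x, m): return torch.clamp(x + m, 0.0, 1.0)
def contrast(x, m):   return torch.clamp((x - 0.5) * (1.0 + m) + 0.5, 0.0, 1.0)

OPS = [identity, brightness, contrast]

# Learnable policy parameters: operation logits and per-op magnitudes.
op_logits = torch.zeros(len(OPS), requires_grad=True)
magnitudes = torch.full((len(OPS),), 0.1, requires_grad=True)

def apply_policy(images, tau=1.0):
    """Sample an op with Gumbel-softmax so the choice stays differentiable."""
    weights = F.gumbel_softmax(op_logits, tau=tau, hard=False)   # (num_ops,)
    return sum(w * op(images, m) for w, op, m in zip(weights, OPS, magnitudes))

# Toy usage: gradients reach op_logits and magnitudes through the task loss.
images = torch.rand(8, 3, 32, 32)
loss = apply_policy(images).mean()
loss.backward()
print(op_logits.grad, magnitudes.grad)
```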
VITA-Group/L2O-Training-Techniques | ['imitation learning'] | ['Training Stronger Baselines for Learning to Optimize'] | L2O-DM & RNNProp/networks.py L2O-DM & RNNProp/util.py L2O-Scale/l2o-scale-regularize-train/optimizer/coordinatewise_rnn.py L2O-Scale/l2o-scale-regularize-train/metarun.py L2O-Scale/l2o-scale-regularize-train/mnist/mnist_softmax_xla.py L2O-Scale/l2o-scale-regularize-train/optimizer/hierarchical_rnn.py L2O-Scale/learned_optimizer/metatest.py L2O-DM & RNNProp/train_rnnprop.py L2O-Scale/l2o-scale-regularize-train/mnist/fully_connected_feed.py L2O-Scale/l2o-scale-regularize-train/mnist/mnist_with_summaries.py L2O-Scale/l2o-scale-regularize-train/optimizer/utils.py L2O-Scale/learned_optimizer/problems/model_adapter.py L2O-DM & RNNProp/preprocess.py L2O-DM & RNNProp/meta_rnnprop_eval.py L2O-Scale/l2o-scale-regularize-train/problems/model_adapter.py L2O-DM & RNNProp/evaluate_dm.py L2O-Scale/l2o-scale-regularize-train/mnist/input_data.py L2O-Scale/learned_optimizer/problems/problem_spec.py L2O-DM & RNNProp/evaluate_rnnprop.py L2O-Scale/l2o-scale-regularize-train/optimizer/trainable_adam.py L2O-Scale/l2o-scale-regularize-train/metaopt.py L2O-Scale/learned_optimizer/optimizer/coordinatewise_rnn.py L2O-Scale/learned_optimizer/optimizer/rnn_cells.py L2O-Scale/l2o-scale-regularize-train/mnist/mnist.py L2O-Scale/l2o-scale-regularize-train/mt_utils.py L2O-Scale/l2o-scale-regularize-train/optimizer/trainable_optimizer.py L2O-Scale/learned_optimizer/problems/datasets.py L2O-Scale/l2o-scale-regularize-train/problems/problem_sets.py L2O-Scale/learned_optimizer/optimizer/learning_rate_schedule.py L2O-Scale/learned_optimizer/optimizer/utils.py L2O-DM & RNNProp/problems.py L2O-DM & RNNProp/meta_dm_train.py L2O-Scale/l2o-scale-regularize-train/dtata_process.py L2O-Scale/learned_optimizer/metarun.py L2O-Scale/learned_optimizer/optimizer/global_learning_rate.py L2O-DM & RNNProp/train_dm.py L2O-Scale/learned_optimizer/optimizer/trainable_optimizer.py L2O-Scale/learned_optimizer/problems/problem_sets.py L2O-Scale/l2o-scale-regularize-train/optimizer/learning_rate_schedule.py L2O-Scale/l2o-scale-regularize-train/metatest.py L2O-Scale/learned_optimizer/metaopt.py L2O-Scale/l2o-scale-regularize-train/problems/problem_spec.py L2O-Scale/l2o-scale-regularize-train/problems/problem_generator.py L2O-Scale/l2o-scale-regularize-train/optimizer/rnn_cells.py L2O-Scale/learned_optimizer/optimizer/trainable_adam.py L2O-Scale/learned_optimizer/optimizer/hierarchical_rnn.py L2O-Scale/l2o-scale-regularize-train/mnist/__init__.py L2O-Scale/learned_optimizer/problems/problem_generator.py L2O-DM & RNNProp/meta_rnnprop_train.py L2O-DM & RNNProp/vgg16.py L2O-DM & RNNProp/meta.py L2O-DM & RNNProp/data_generator.py L2O-Scale/l2o-scale-regularize-train/optimizer/global_learning_rate.py L2O-Scale/l2o-scale-regularize-train/problems/datasets.py opt_variables_initializer data_loader main main _nested_variable MetaOptimizer _make_nets _get_variables _nested_tuple _nested_assign _wrap_variable_creation _make_with_custom_variables _nested_variable MetaOptimizer _make_nets _get_variables _nested_tuple _nested_assign _wrap_variable_creation _make_with_custom_variables _nested_variable MetaOptimizer _make_nets _get_variables _nested_tuple _nested_assign _wrap_variable_creation _make_with_custom_variables _nested_variable MetaOptimizer _make_nets _get_variables _nested_tuple _nested_assign _wrap_variable_creation _make_with_custom_variables KernelDeepLSTM _get_layer_initializers _update_adam_estimate StandardDeepLSTM 
_debias_adam_estimate Adam RNNprop _get_initializers save _convert_to_initializer Sgd factory CoordinateWiseDeepLSTM Network LogAndSign Clamp mnist mnist_conv cifar10 simple_multi_optimizer LeNet NAS _xent_loss confocal_microscopy_3d simple square_cos quadratic _maybe_download_cifar10 vgg16_cifar10 ensemble main main get_config run_eval_epoch print_stats run_epoch get_default_net_config fc_layer VGG16 dropout max_pool conv_layer post_process1 post_process2 post_process4 process analysis post_process3 sample_numiter validate test_optimizer sigmoid_weights run_wall_clock_test train_optimizer main register_optimizers main register_optimizers opt_variables_initializer mt_utils do_eval fill_feed_dict run_training placeholder_inputs main _extract_labels _maybe_download _DataSet _extract_images _read32 read_data_sets _dense_to_one_hot inference loss training evaluation main main train CoordinatewiseRNN GlobalLearningRate HierarchicalRNN LearningRateSchedule BiasGRUCell TrainableAdam local_state_variables is_local_state_variable list_mult create_local_state_variable_name TrainableOptimizer flatten_and_sort slice_tensor affine rms_scaling accumulate_sparse_gradients asinh make_finite new_mean_squared update_slices stack_tensor project random_mlp random_binary mnist noisy_parity_class cifar10 batch_indices random random_symmetric Dataset ModelAdapter _make_with_custom_variables _get_variables Branin Bowl Rosenbrock Beale Quadratic LogObjective OutwardSnake OneHotSparseSoftmaxRegression MatMulAlgorithm LogSumExp MinMaxWell SparseProblem SumOfQuadratics SparseSoftmaxRegression StyblinskiTang SoftmaxRegression Problem Booth FullyConnected Ackley SoftmaxClassifier ConvNet Saddle matmul_problem_sequence SumTask Rescale init_fixed_variables Problem2D Michalewicz _mesh Norm Matyas DependencyChain make_rosenbrock_loss_and_init ProjectionQuadratic IsotropicQuadratic norm_problems_noisy test_problems mnist_mlp_problems sum_problems adapter_rosenbrock_local dependency_chain_problems sparse_gradient_problems_mlp min_max_well_problems one_hot_sparse_softmax_2_class_sparse_problems projection_quadratic_problems softmax_2_class_problems outward_snake_problems quadratic_problems_noisy log_objective_problems adapter_rosenbrock_worker sum_of_quadratics_problems sparse_softmax_2_class_sparse_problems _test_problem_mlp_scaled_init_mnist quadratic_problems bowl_problems_noisy optimization_test_problems_noisy cifar10_conv_problems fully_connected_random_2_class_problems _test_problem_mlp_scaled_init_large sparse_gradient_problems rescale_problems sum_problems_noisy optimization_test_problems softmax_2_class_problems_noisy mnist_conv_problems _test_problem_mlp_scaled_init_small norm_problems matmul_problems bowl_problems quadratic_problems_large Spec sample_numiter validate test_optimizer sigmoid_weights run_wall_clock_test train_optimizer main register_optimizers main register_optimizers CoordinatewiseRNN GlobalLearningRate HierarchicalRNN LearningRateSchedule BiasGRUCell TrainableAdam local_state_variables is_local_state_variable create_local_state_variable_name TrainableOptimizer flatten_and_sort slice_tensor affine rms_scaling accumulate_sparse_gradients asinh make_finite new_mean_squared update_slices stack_tensor project random_mlp random_binary mnist noisy_parity_class cifar10 random random_symmetric Dataset ModelAdapter _make_with_custom_variables _get_variables Branin Bowl Rosenbrock Beale Quadratic LogObjective OutwardSnake OneHotSparseSoftmaxRegression MatMulAlgorithm LogSumExp MinMaxWell SparseProblem 
SumOfQuadratics SparseSoftmaxRegression StyblinskiTang SoftmaxRegression Problem Booth FullyConnected Ackley SoftmaxClassifier ConvNet Saddle matmul_problem_sequence SumTask Rescale ConvNetOrig init_fixed_variables Problem2D Michalewicz _mesh Norm Matyas DependencyChain make_rosenbrock_loss_and_init ProjectionQuadratic IsotropicQuadratic norm_problems_noisy test_problems mnist_mlp_problems sum_problems adapter_rosenbrock_local dependency_chain_problems test_mnist_mlp_deeper_problems test_cifar10_conv_problems sparse_gradient_problems_mlp min_max_well_problems one_hot_sparse_softmax_2_class_sparse_problems projection_quadratic_problems softmax_2_class_problems outward_snake_problems quadratic_problems_noisy log_objective_problems adapter_rosenbrock_worker sum_of_quadratics_problems sparse_softmax_2_class_sparse_problems _test_problem_mlp_scaled_init_mnist quadratic_problems bowl_problems_noisy optimization_test_problems_noisy fully_connected_random_2_class_problems test_mnist_mlp_problems test_mnist_conv_problems test_mnist_conv_orig_problems _test_problem_mlp_scaled_init_large sparse_gradient_problems test_mnist_mlp_relu_problems rescale_problems sum_problems_noisy optimization_test_problems softmax_2_class_problems_noisy _test_problem_mlp_scaled_init_small norm_problems matmul_problems bowl_problems quadratic_problems_large Spec _get_beta_accumulators list extend meta_loss TRAINABLE_VARIABLES set_random_seed MetaOptimizer warning get_slot_names seed get_collection num_steps variables_initializer format get_config mkdir ConfigProto optimizer learning_rate minimize print problem path AdamOptimizer output_path beta1 beta2 isinstance isinstance tuple get_variable deque dict dict getattr eval defaultdict get_variables_in_module split ndarray isinstance isinstance _convert_to_initializer sparse_softmax_cross_entropy_with_logits list constant relu MLP reshape load_mnist Sequential labels sigmoid images getattr constant reshape load_mnist labels images getattr join format urlretrieve st_size print extractall stat makedirs read decode_raw string_input_producer uint8 add_queue_runner slice reshape transpose float32 divide cast int32 RandomShuffleQueue _maybe_download_cifar10 QueueRunner FixedLengthRecordReader Sequential div list string_input_producer transpose cast add_queue_runner ConvNet2D MLP slice QueueRunner FixedLengthRecordReader read decode_raw uint8 reshape float32 sigmoid int32 RandomShuffleQueue _maybe_download_cifar10 read decode_raw string_input_producer uint8 add_queue_runner slice reshape transpose float32 div cast int32 RandomShuffleQueue _maybe_download_cifar10 QueueRunner FixedLengthRecordReader read decode_raw string_input_producer uint8 add_queue_runner VGG16 slice reshape transpose float32 divide cast int32 RandomShuffleQueue _maybe_download_cifar10 QueueRunner FixedLengthRecordReader optimizers save_path meta_minimize float32 placeholder data_loader unroll_length if_mt num_mt append if_cl exp assign_func len timer uniform zip append range run append range timer print format log10 mnist mnist_conv cifar10 confocal_microscopy_3d square_cos NAS LeNet simple_multi_optimizer simple quadratic vgg16_cifar10 int conv2d bias_add bias_add matmul print len lstrip split append enumerate print rfind strip lstrip split append enumerate find defaultdict strip append analysis enumerate defaultdict print strip float enumerate print strip append float enumerate sum exp arange exponential round clip time EMPTY_DATASET format Graph print size shuffle replica_device_setter float ps_tasks 
enumerate len reset_rnn_params print batch_indices mini_batch_size mt_labels run append array range enumerate len size EMPTY_DATASET ConfigProto init_fixed_variables EMPTY_DATASET isinstance gradients size init_variables float32 placeholder int32 objective GlobalLearningRate TrainableAdam CoordinatewiseRNN LearningRateSchedule HierarchicalRNN norm_problems_noisy include_outward_snake_problems include_sparse_softmax_problems mnist_mlp_problems sum_problems include_noisy_softmax_2_class_problems dependency_chain_problems include_cifar10_conv_problems include_one_hot_sparse_softmax_problems include_matmul_problems include_projection_quadratic_problems include_bowl_problems include_fully_connected_random_2_class_problems sparse_gradient_problems_mlp min_max_well_problems one_hot_sparse_softmax_2_class_sparse_problems num_meta_iterations projection_quadratic_problems num_problems softmax_2_class_problems outward_snake_problems include_mnist_conv_problems quadratic_problems_noisy include_softmax_2_class_problems log_objective_problems Spec sum_of_quadratics_problems sparse_softmax_2_class_sparse_problems cell_size quadratic_problems include_noisy_sum_problems include_noisy_quadratic_problems bowl_problems_noisy include_noisy_norm_problems include_sum_of_quadratics_problems optimization_test_problems_noisy include_norm_problems train_dir cifar10_conv_problems include_noisy_optimization_test_problems fully_connected_random_2_class_problems include_large_quadratic_problems include_log_objective_problems register_optimizers cell_cls include_noisy_bowl_problems include_rescale_problems MakeDirs include_optimization_test_problems rescale_problems sparse_gradient_problems train_optimizer include_sum_problems include_min_max_well_problems include_quadratic_problems include_mnist_mlp_problems join include_sparse_gradient_problems sum_problems_noisy optimization_test_problems softmax_2_class_problems_noisy num_cells extend mnist_conv_problems norm_problems matmul_problems include_dependency_chain_problems bowl_problems quadratic_problems_large build realpath enumerate dirname EMPTY_DATASET test_optimizer size int32 float32 placeholder fake_data next_batch batch_size fill_feed_dict batch_size print num_examples float range fake_data input_data_dir read_data_sets log_dir run_training rmtree exists makedirs newbyteorder print zeros arange print print join urlretrieve MakeDirs _maybe_download fake _DataSet dict cast GradientDescentOptimizer Variable scalar minimize in_top_k argmax Session Timeline run ON_1 data_dir matmul int64 cast range close xla equal Variable sparse_softmax_cross_entropy RunMetadata reduce_mean zeros next_batch read_data_sets RunOptions log_dir nn_layer add_run_metadata graph data_dir print merge_all FileWriter close max_steps RunMetadata add_summary run range read_data_sets scalar InteractiveSession get_collection_ref defaultdict cond append concat equal num_elements reduce_all cond asinh reshape sqrt new_mean_squared unsorted_segment_sum values unique indices reshape concat gather reshape concat cast int32 expand_dims reshape concat scatter_nd shape cast int32 stack_tensor fill expand_dims equal seed randint binomial sum make_classification seed randint zeros seed normal zeros concatenate seed normal clip sqrt range join str dtype print astype shape empty range load_batch arange tolist shuffle append range __init__ value AdagradOptimizer test_mnist_mlp_deeper_problems include_mnist_conv_orig_problems test_cifar10_conv_problems save_dir GradientDescentOptimizer 
include_mnist_mlp_deeper_problems include_mnist_mlp_relu_problems test_mnist_mlp_problems test_mnist_conv_problems test_mnist_conv_orig_problems test_mnist_mlp_relu_problems | # Training Stronger Baselines for Learning to Optimize [](https://opensource.org/licenses/MIT) Code for this paper [Training Stronger Baselines for Learning to Optimize](). Tianlong Chen*, Weiyi Zhang*, Jingyang Zhou, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang ## Overview With many efforts devoted to designing more sophisticated **L2O models**, we argue for another orthogonal, under-explored theme: the **training techniques** for those L2O models. We show that even **the simplest L2O model could have been trained much better**. - **Curriculum Learning** We first present a progressive training scheme to gradually increase the optimizer unroll length, to mitigate a well-known L2O dilemma of truncation bias (shorter unrolling) versus gradient explosion (longer unrolling). - **Imitation Learning** We further leverage off-policy imitation learning to guide the L2O learning , by taking reference to the behavior of analytical optimizers. | 1,097 |
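The VITA-Group/L2O-Training-Techniques README above names two training techniques: a curriculum over the unroll length and imitation of analytical optimizers. The following is a hedged, minimal sketch of the curriculum idea only, assuming a toy coordinate-wise LSTM optimizer and random quadratic optimizees rather than the paper's actual models and problem sets; the learned optimizer is meta-trained with progressively longer unrolls to trade off truncation bias against exploding meta-gradients.

```python
import torch
import torch.nn as nn

class RNNOptimizer(nn.Module):
    """Coordinate-wise LSTM that maps a parameter's gradient to an update."""
    def __init__(self, hidden=20):
        super().__init__()
        self.hidden = hidden
        self.lstm = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        h, c = self.lstm(grad, state)
        return 0.1 * self.out(h), (h, c)

def quadratic(theta, A, b):
    return ((A @ theta - b) ** 2).mean()

def meta_train(l2o, meta_opt, unroll, n_problems=16, dim=10):
    for _ in range(n_problems):
        A, b = torch.randn(dim, dim), torch.randn(dim)
        theta = torch.randn(dim, requires_grad=True)
        state = (torch.zeros(dim, l2o.hidden), torch.zeros(dim, l2o.hidden))
        meta_loss = 0.0
        for _ in range(unroll):
            loss = quadratic(theta, A, b)
            # keep the graph so the meta-loss can backprop into the LSTM
            grad, = torch.autograd.grad(loss, theta, create_graph=True)
            update, state = l2o(grad.unsqueeze(-1), state)
            theta = theta + update.squeeze(-1)
            meta_loss = meta_loss + quadratic(theta, A, b)
        meta_opt.zero_grad()
        meta_loss.backward()
        meta_opt.step()

l2o = RNNOptimizer()
meta_opt = torch.optim.Adam(l2o.parameters(), lr=1e-3)
for unroll in (5, 10, 20, 50):   # curriculum: gradually longer unrolls
    meta_train(l2o, meta_opt, unroll)
```

An imitation term would additionally penalize the gap between the learned update and the step an analytical optimizer such as Adam would take on the same gradient; that part is omitted here.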
Vaibhavck/Style-Transfer-VGG19 | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | utils.py main.py convert_tensor load_image extract_featrues gram_matrix BytesIO convert Compose startswith content open squeeze transpose array clip detach items layer size view | # Style-Transfer-VGG19 Style transfer using pretrained VGG19 model (pytorch). # Reference <ol> <li> Leon A. Gatys’ paper, A Neural Algorithm of Artistic Style : https://arxiv.org/abs/1508.06576 </li> <li> Posts : <ul type="disc"> | 1,098 |
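The Vaibhavck/Style-Transfer-VGG19 row lists a `gram_matrix` helper among its dependencies; in Gatys-style transfer the Gram matrix of VGG feature maps is what encodes style. A minimal, generic sketch (not necessarily the repo's exact normalization):

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """features: (C, H, W) activations from one VGG layer -> (C, C) Gram matrix."""
    c, h, w = features.size()
    flat = features.view(c, h * w)
    return (flat @ flat.t()) / (c * h * w)

def style_loss(gen_feats, style_feats):
    """MSE between Gram matrices of generated and style feature maps for one layer."""
    return torch.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
```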
Vanditg/Person-Retrieval-AVSS-2018 | ['person retrieval'] | ['Person Retrieval in Surveillance Video using Height, Color and Gender'] | custom_layers/scale_layer.py parallel_model.py PersonFilter.py Video_demo_person_identification.py custom_layers/googlenet_custom_layers.py modalities/HeightEstimation.py config.py custom_layers/__init__.py coco.py utils.py DenseGender.py DenseColor.py model.py CocoConfig Config conv_block transition_block DenseNet dense_block conv_block transition_block DenseNet dense_block fpn_classifier_graph MaskRCNN compose_image_meta rpn_bbox_loss_graph norm_boxes_graph compute_backbone_shapes rpn_class_loss_graph log DetectionTargetLayer trim_zeros_graph log2_graph parse_image_meta parse_image_meta_graph data_generator rpn_graph identity_block BatchNorm build_fpn_mask_graph load_image_gt build_rpn_targets resnet_graph unmold_image PyramidROIAlign apply_box_deltas_graph denorm_boxes_graph generate_random_rois detection_targets_graph build_detection_targets overlaps_graph mrcnn_bbox_loss_graph conv_block batch_pack_graph ProposalLayer smooth_l1_loss clip_boxes_graph mrcnn_class_loss_graph mrcnn_mask_loss_graph mold_image build_rpn_model DetectionLayer refine_detections_graph ParallelModel build_model gender_classification torso_mask_coordinates height_estimation color_classification color_filter height_filter person_identification compute_ap norm_boxes compute_recall apply_box_deltas compute_overlaps compute_iou resize_image box_refinement_graph generate_pyramid_anchors mold_mask generate_anchors compute_ap_range compute_overlaps_masks denorm_boxes unmold_mask download_trained_weights non_max_suppression minimize_mask resize_mask extract_bboxes trim_zeros compute_matches batch_slice expand_mask box_refinement Dataset InferenceConfig PoolHelper LRN Scale head_feet_points variables main undistortion height_calculation array int transition_block Model load_weights dense_block Input range str str conv_block range concatenate ljust print BACKBONE callable str conv_block identity_block range stack minimum concat maximum set_shape split minimum reshape maximum tile expand_dims split concat reduce_max boolean_mask MASK_SHAPE crop_and_resize gather box_refinement_graph round trim_zeros_graph ROI_POSITIVE_RATIO transpose squeeze pad cast expand_dims range USE_MINI_MASK overlaps_graph cond int TRAIN_ROIS_PER_IMAGE float32 greater maximum int32 split minimum apply_box_deltas_graph reshape clip_boxes_graph concat gather map_fn DETECTION_MAX_INSTANCES stack gather_nd DETECTION_MIN_CONFIDENCE pad set_intersection expand_dims argmax BBOX_STD_DEV Input rpn_graph int_shape less abs cast switch constant not_equal squeeze where mean sparse_categorical_crossentropy gather_nd cast int32 equal IMAGES_PER_GPU batch_pack_graph switch constant smooth_l1_loss squeeze where mean gather_nd cast int32 sum equal reduce_sum sparse_softmax_cross_entropy_with_logits cast gather argmax switch constant reshape smooth_l1_loss mean int64 stack cast gather_nd gather switch constant reshape transpose mean shape int64 stack cast gather_nd gather binary_crossentropy uint8 minimize_mask compose_image_meta extract_bboxes load_mask zeros astype randint resize_image shape warning resize_mask MINI_MASK_SHAPE load_image bool fliplr augment_image to_deterministic int ROI_POSITIVE_RATIO concatenate resize astype TRAIN_ROIS_PER_IMAGE compute_iou choice MASK_SHAPE int32 box_refinement USE_MINI_MASK zeros argmax range sum zip ones compute_overlaps choice RPN_TRAIN_ANCHORS_PER_IMAGE zeros argmax amax len int sort min 
hstack randint zeros max range split image_ids arange IMAGE_SHAPE compute_backbone_shapes RPN_ANCHOR_RATIOS generate_pyramid_anchors BACKBONE_STRIDES MAX_GT_INSTANCES shape expand_dims load_image_gt build_rpn_targets astype shuffle copy choice generate_random_rois build_detection_targets RPN_ANCHOR_SCALES mold_image RPN_ANCHOR_STRIDE float32 extend zeros len list array boolean_mask reduce_sum cast bool abs append range constant concat float32 cast split constant concat float32 cast split reset_default_graph Input int min max main int append str sorted imwrite torso_mask_coordinates print astype float32 expand_dims argsort resize imread listdir predict print str sorted imwrite astype float32 expand_dims resize imread listdir predict str gender_classification imwrite print putText height_estimation color_classification FONT_HERSHEY_SIMPLEX write height_filter rectangle color_filter append range zeros array range minimum maximum zeros range compute_iou T astype float32 dot sum astype delete float32 compute_iou append astype float32 stack cast float32 log astype float32 log dtype min pad resize randint max pad astype resize zeros bool range astype resize zeros bool range zeros bool astype resize arange concatenate reshape flatten sqrt meshgrid array append generate_anchors range len ones trim_zeros compute_overlaps_masks range len arange concatenate cumsum compute_matches astype float32 maximum sum range len compute_ap format print mean append compute_overlaps set argmax max len list graph_fn zip append range len print array array asarray asmatrix transpose hstack inv squeeze dot true_divide variables undistortion dot matrix cos sin sqrt head_feet_points height_calculation | # Person-Retrieval-AVSS-2018 Implementation of our [IEEE AVSS 2018](https://dblp.org/db/conf/avss/avss2018.html) paper ["Person Retrieval in Surveillance Video using Height, Color, and Gender"](https://ieeexplore.ieee.org/document/8639145). If you find this code useful in your research, please consider citing: ```@inproceedings{galiyawala2018person, title={Person retrieval in surveillance video using height, color and gender}, author={Galiyawala, Hiren and Shah, Kenil and Gajjar, Vandit and Raval, Mehul S}, booktitle={2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)}, pages={1--6}, year={2018}, organization={IEEE} } | 1,099 |