repo | tasks | titles | dependencies | readme | __index_level_0__
---|---|---|---|---|---
hiromu/robust_audio_ae | ['speech recognition'] | ['Robust Audio Adversarial Example for a Physical Attack'] | make_checkpoint.py record.py tf_logits.py attack.py weight_decay_optimizers.py recognize.py Attack HereBeDragons Wrapper main main make_checkpoint main main callback compute_mfcc get_logits DecoupledWeightDecayExtension AdamWOptimizer extend_with_decoupled_weight_decay MomentumWOptimizer Wrapper print parse_args add_argument ArgumentParser read ParseFromString GraphDef make_checkpoint model audio out trie read exists lm timer stt Model alphabet enableDecoderWithLM len append PyAudio frombuffer open write_wav stop_stream mkstemp terminate ceil range concatenate astype close start_stream int join remove reshape float32 getoutput zeros array as_list rfft T arange concat float32 square reduce_sum matmul pi stack cast sin abs array log join initialize_globals reshape concat BiRNN stack dirname append zeros moments | # Robust Audio Adversarial Example for a Physical Attack This repository includes the implementation of our paper: [Robust Audio Adversarial Example for a Physical Attack](https://www.ijcai.org/proceedings/2019/741). You can find generated examples at [our project page](https://yumetaro.info/projects/audio-ae/). ## Usage ### Preparation 1. Install the dependencies - librosa - numpy - pyaudio - scipy | 2,300 |
hitwsl/transition_disfluency | ['information retrieval'] | ['Transition-Based Disfluency Detection using LSTMs'] | data/get_feature_all.py data/conll2parser.py data/convert-conll2trans.py data/trans.py getfeatures fuzzyMatch set len stdin print strip split append fuzzyMatch range len | This code is part of the paper "Transition-Based Disfluency Detection using LSTMs", accepted at the EMNLP 2017 conference. Chinese disfluency data will be pushed later. The code is extended from https://github.com/clab/stack-lstm-ner and https://github.com/clab/lstm-parser-with-beam-search How to run it: step 1 (compile): cd compile and see command.txt; step 2 (data preprocessing): cd data and ./run.sh; step 3 (run): cd run and see *.sh | 2,301
hiwonjoon/ICRA2019-Activity-Localize | ['one shot learning'] | ['One-Shot Learning of Multi-Step Tasks from Observation via Activity Localization in Auxiliary Video'] | reacher_classify.py reacher_gen.py nets/ops.py reacher.py inputs/rwd_dataset.py nets/act_model.py train_r_func.py train_policy.py nets/rwd_model.py inputs/act_dataset.py nets/net.py reacher_eval.py main main eval_align PerfectAligner eval ClassifierAligner MamlAligner gen saveCompressed generate train test main Reacher loadCompressed Reacher AlignedReacher loadCompressed Maml _reacher_arch _xent_loss Classifier Net WeightNormLinear SymPadConv2d ResidualBlock DilatedConv3D InstanceNorm WeightNormTransposedConv2d LayerNorm Conv3d DepthConv2d Conv2d TransposedConv2d WeightNormConv2d Lrelu BatchNorm WeightNormSymPadConv2d Linear _aligner_arch _reacher_arch OrderBasedRewardFunc load_partial set_random_seed save xrange Session exists run seed start_queue_runners FileWriter mkdir ConfigProto build_queue print graph write tqdm Reacher Coordinator finalize add_summary build_queue_triplet combinations items align _IoU len shuffle tqdm _build_fvs_traj append range enumerate load seed eval_align print group set_random_seed Reacher ClassifierAligner MamlAligner finalize append global_variables_initializer ConfigProto range Session local_variables_initializer run permutation set_targets_color count list render append range update set_goals act set unique make combinations action_space SmallReactivePolicy observation_space tqdm reset step array len str list items print write_videofile ImageSequenceClip mkdir Path generate saveCompressed exists enumerate load set_global_seeds learn VecNormalize DummyVecEnv ConfigProto __enter__ Session str Path PerfectAligner AlignedReacher one_hot | # One-Shot Learning of Multi-Step Tasks from Observation via Activity Localization in Auxiliary Video (ICRA 2019) Wonjoon Goo and Scott Niekum, University of Texas at Austin  This repository contains codes for the ICRA 2019 paper. 
If you use this code as part of any published research, please consider citing the following paper. ``` @inproceedings{Goo2019, title = {"One-Shot Learning of Multi-Step Tasks from Observation via Activity Localization in Auxiliary Video"}, author = {Wonjoon Goo and Scott Niekum}, year = {2019}, booktitle = {2019 IEEE International Conference on Robotics and Automation (ICRA)},
hjeffreywang/Failed_LSTM_network | ['stock market prediction'] | ['Forecasting directional movements of stock prices for intraday trading using LSTM and random forests'] | NextDay-240,1-RF.py Intraday-240,1-LSTM.py Statistics.py NextDay-240,1-LSTM.py Intraday-240,3-RF.py Intraday-240,3-LSTM.py Intraday-240,1-RF.py reshaper trainer callbacks_req simulate trained create_stock_data makeLSTM create_label scalar_normalize simulate create_stock_data create_label trainer reshaper trainer callbacks_req simulate trained create_stock_data makeLSTM create_label scalar_normalize simulate create_stock_data create_label trainer makeCuDNNLSTM trainer callbacks_req simulate trained create_stock_data makeSimpleLSTM Normalize create_label simulate create_stock_data create_label trainer Statistics Model Input compile summary str EarlyStopping ModelCheckpoint CSVLogger array swapaxes split list toarray callbacks_req reshape hstack makeLSTM shuffle OneHotEncoder set fit reshape list load_model set print sorted DataFrame keys print cumsum list apply list shift drop dropna DataFrame len transform RobustScaler fit seed print astype RandomForestClassifier range reshaper pct_change Series array Model Input compile summary Model Input compile summary makeCuDNNLSTM makeSimpleLSTM transform fit | # [Forecasting directional movements of stock-prices for intraday trading using LSTM and random-forest](https://arxiv.org/abs/2004.10178) **https://arxiv.org/abs/2004.10178** <br> **Pushpendu Ghosh, Ariel Neufeld, Jajati K Sahoo** We employ both random forests on the one hand and LSTM networks (more precisely CuDNNLSTM) on the other hand as training methodology to analyze their effectiveness in forecasting out-of-sample directional movements of constituent stocks of the S&P 500, for intraday trading, from January 1993 till December 2018. #### Requirements ``` pip install scikit-learn==0.20.4 pip install tensorflow==1.14.0 ``` ## Plots | 2,303 |
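Among the files listed above, `create_label` builds the binary up/down targets that both the LSTM and the random forest are trained on. As a rough, stdlib-only illustration of a directional label — a simplification, since the paper actually labels each stock by whether its return beats the cross-sectional median — one might write:

```python
def directional_labels(closes):
    """Label day t with 1 if the next close is higher than the current
    one, else 0 -- a simplified stand-in for the repository's
    create_label, which ranks returns across stocks instead."""
    return [1 if nxt > cur else 0 for cur, nxt in zip(closes, closes[1:])]

print(directional_labels([100.0, 101.5, 99.0, 103.2]))  # [1, 0, 1]
```

A series of n prices yields n-1 labels, one per transition between consecutive closes.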
hjeun/idu | ['action detection'] | ['Learning to Discriminate Information for Online Action Detection'] | patches/tensorflow._api.v1.keras.layers/__init__.py thumos14/model_IDU.py thumos14/test.py thumos14/utils.py tvseries/model_IDU.py patches/tensorflow.python.keras.layers/__init__.py patches/tensorflow.python.keras.layers/recurrent.py tvseries/test.py tvseries/utils.py RNN SimpleRNN UnifiedLSTM _generate_zero_filled_state_for_cell _generate_dropout_mask PeepholeLSTMCell _is_multiple_state SimpleRNNCell _canonical_to_params GRUCell GRU StackedRNNCells LSTMCell standard_lstm IDUCell _generate_zero_filled_state _standardize_args cudnn_lstm IDU LSTM Model next_batch_for_test get_relevance get_actionness frame_level_map_n_cap Model next_batch_for_test get_relevance get_actionness frame_level_map_n_cap int_shape rnn transpose concat _canonical_to_params cudnn_rnn expand_dims split to_list_or_none tuple isinstance dtype is_sequence argmax astype float32 astype float32 int32 label argmax get_actionness deepcopy append get_relevance len argsort append sum range enumerate mean | # Information Discrimination Units (IDU) **Learning to Discriminate Information for Online Action Detection, CVPR 2020** Hyunjun Eun, Jinyoung Moon, Jongyoul Park, Chanho Jung, Changick Kim [[`arXiv`](https://arxiv.org/abs/1912.04461)] ## Introduction <div align="center"> <img src="figures/framework.png" width="1000px" /> </div> This is the official implementation of Information Discrimination Units (IDU). For online action detection, we investigate the question of "how RNNs can learn to explicitly discriminate relevant information from irrelevant information for detecting actions in the present". To this end, we propose a novel recurrent unit that extends GRU with a mechanism utilizing current information and an early embedding module.
We perform extensive experiments on two benchmark datasets, where our Information Discrimination Networks (IDN) achieve state-of-the-art performance of 86.1% mcAP and 60.3% mAP on [TVSeries](https://homes.esat.kuleuven.be/psi-archive/rdegeest/TVSeries.html) and [THUMOS-14](https://www.crcv.ucf.edu/THUMOS14/), respectively. ## Updates
hjian42/Geo-Twitter2019 | ['word sense induction'] | ['Breaking Sticks and Ambiguities with Adaptive Skip-gram'] | preprocessing/util.py preprocessing/preprocess_tokenize.py preprocessing/remove_duplicate.py preprocessing/odict.py preprocessing/json2txt.py preprocessing/word_geo.py preprocessing/merge_files.py preprocessing/anonymize_data.py preprocessing/emoticons.py preprocessing/extract.py preprocessing/backup.py data_collection/test.py preprocessing/prepare_data_adagram.py data_collection/crawl.py preprocessing/twokenize.py MyListener has_url parse is_normal_user get_pos_tokens get_tokens analyze_tweet flush_cur_user prune canonicalize_rare_word save_word_docs save_vocab_info get_tokens Numberizer merge_jsons count_stats merge_txts odict clean_tweets clean_tweets get_pos_tokens get_tokens post_process optional align AlignmentFailed neg_lookahead simple_tokenize unicodify squeeze_whitespace edge_punct_munge Tokenization pos_lookahead regexify_abbrev unprotected_tokenize regex_or tokenize xprod fix_stdio ShutUpAboutBrokenPipe flatten na_rm Struct argmax compose DataFrame stringify stable_uniq set_and unicodify Counter set_or myjoin fancy_sub compose2 product tsv_reader read_tsv write_tsv fullgroupby smart_fmt uniq_c which flip DefaultMapping write_csv chaincompose dgroupby smart_time_fmt read_csv get_tokens dumps setattr first_parse search lower tokenize pos_tag word_tokenize search sub get defaultdict print canonicalize_rare_word add Numberizer print encode enumerate format print format close open print len format set sub compile join list any range len isinstance squeeze_whitespace align Tokenization post_process end search span edge_punct_munge append range finditer len append search sub sub ShutUpAboutBrokenPipe stdout open isinstance DictReader list close open sorted writerow close dict DictWriter keys open list tsv_reader close open defaultdict append add set update set sort compose2 reversed repl_fn end write finditer StringIO sort fmt1 | # Geo-Twitter2019 
## Description In this project, we use a novel non-parametric skip-gram model to capture the dialectal changes of English at multiple resolutions. This repository contains the tweet ids we used for training the model. You are free to crawl the data using these ids and preprocess the data using our tools to replicate our research results. ## Dataset
| Number | USA | UK | Total |
|--------|------------|------------|------------|
| tweet | 2,075,394 | 1,088,232 | 3,163,626 |
| token | 41,637,107 | 22,012,953 | 63,650,060 |
| term | 865,784 | 469,570 | 1,167,790 |
Note: the CMU geo data only contains 378K tweets.
hjjpku/adaptive_sampler | ['action recognition'] | ['Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?'] | utils/eval_ucf101.py utils/video_jpg.py opts.py models/resnext.py train.py datasets/hmdb51.py dataset.py models/wide_resnet.py models/densenet.py utils/ucf101_json.py utils.py utils/eval_kinetics.py datasets/activitynet.py models/pre_act_resnet.py temporal_transforms.py test.py utils/kinetics_json.py datasets/ucf101.py utils/eval_hmdb51.py utils/hmdb51_json.py mean.py utils/n_frames_ucf101_hmdb51.py datasets/kinetics.py main.py target_transforms.py model.py utils/n_frames_kinetics.py utils/video_jpg_ucf101_hmdb51.py utils/fps.py validation.py spatial_transforms.py models/resnet.py utils/video_jpg_kinetics.py get_training_set get_test_set get_validation_set get_std get_mean generate_model parse_opts MultiScaleCornerCrop CenterCrop MultiScaleRandomCrop ToTensor Compose Scale Normalize RandomHorizontalFlip CornerCrop ClassLabel VideoID Compose TemporalBeginCrop LoopPadding TemporalCenterCrop TemporalRandomCrop calculate_video_results test train_epoch calculate_accuracy AverageMeter Logger load_value_file val_epoch modify_frame_indices get_class_labels load_annotation_data video_loader get_end_t make_dataset ActivityNet accimage_loader get_default_image_loader get_default_video_loader make_untrimmed_dataset pil_loader get_video_names_and_annotations get_class_labels load_annotation_data video_loader make_dataset accimage_loader HMDB51 get_default_image_loader get_default_video_loader pil_loader get_video_names_and_annotations get_class_labels load_annotation_data video_loader make_dataset accimage_loader Kinetics get_default_image_loader get_default_video_loader pil_loader get_video_names_and_annotations UCF101 get_class_labels load_annotation_data video_loader make_dataset accimage_loader get_default_image_loader get_default_video_loader pil_loader get_video_names_and_annotations get_fine_tuning_parameters DenseNet densenet201 densenet169 
densenet264 _DenseLayer _DenseBlock _Transition densenet121 conv3x3x3 get_fine_tuning_parameters resnet50 downsample_basic_block resnet152 PreActivationBasicBlock resnet34 resnet200 PreActivationBottleneck resnet18 PreActivationResNet resnet101 conv3x3x3 get_fine_tuning_parameters ResNet downsample_basic_block resnet50 Bottleneck resnet152 resnet34 resnet200 resnet18 resnet10 BasicBlock resnet101 ResNeXtBottleneck conv3x3x3 get_fine_tuning_parameters resnet50 downsample_basic_block ResNeXt resnet152 resnet101 conv3x3x3 get_fine_tuning_parameters WideBottleneck resnet50 downsample_basic_block WideResNet HMDBclassification compute_video_hit_at_k get_blocked_videos KINETICSclassification compute_video_hit_at_k UCFclassification compute_video_hit_at_k convert_hmdb51_csv_to_activitynet_json get_labels convert_csv_to_dict load_labels convert_kinetics_csv_to_activitynet_json convert_csv_to_dict class_process class_process load_labels convert_ucf101_csv_to_activitynet_json convert_csv_to_dict class_process class_process video_path UCF101 ActivityNet Kinetics annotation_path HMDB51 video_path UCF101 n_val_samples ActivityNet Kinetics annotation_path HMDB51 video_path UCF101 ActivityNet Kinetics annotation_path HMDB51 get_fine_tuning_parameters in_features densenet264 DataParallel ft_begin_index resnet34 resnet152 cuda load_state_dict resnet200 resnet101 resnet18 format resnet50 resnet10 n_finetune_classes Linear load densenet169 densenet201 print pretrain_path densenet121 parse_args set_defaults add_argument ArgumentParser topk size mean stack append range update time format model print Variable cpu AverageMeter size eval softmax calculate_video_results append range enumerate len model zero_grad save cuda log update format size item enumerate join time result_path criterion backward print Variable calculate_accuracy AverageMeter train step len topk view size t eq item update time format criterion model print Variable calculate_accuracy AverageMeter size eval item cuda log 
enumerate len join format image_loader append exists get_default_image_loader append enumerate append items list format append join format items list format join get_class_labels deepcopy load_annotation_data print modify_frame_indices len load_value_file ceil max range append get_video_names_and_annotations sort listdir items list format join get_class_labels deepcopy load_annotation_data print modify_frame_indices len load_value_file get_end_t ceil max range append get_video_names_and_annotations int min DenseNet DenseNet DenseNet DenseNet append format range named_parameters data isinstance FloatTensor Variable zero_ avg_pool3d cuda cat PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet ResNet ResNet ResNet ResNet ResNet ResNet ResNet ResNeXt ResNeXt ResNeXt WideResNet reset_index size tolist mean unique zeros values enumerate Request urlopen format ceil join read_csv append listdir range len append join listdir update get_labels convert_csv_to_dict read_csv update load_labels convert_csv_to_dict join int print sort append listdir split append range update load_labels convert_csv_to_dict format call mkdir splitext exists | # 3D ResNets for Action Recognition
## Update (2018/2/21)
Our paper "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?" was accepted to CVPR 2018!
We have updated the paper information.
## Update (2018/01/16)
We uploaded some of the fine-tuned models on UCF-101 and HMDB-51.
hkchengrex/CascadePSP | ['scene parsing', 'semantic segmentation'] | ['CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement'] | dataset/split_dataset.py models/sync_batchnorm/comm.py scripts/PASCAL_FINE/convert_deeplab_outputs.py segmentation-refinement/setup.py scripts/ade20K/convert_refinenet_output.py models/sync_batchnorm/batchnorm.py models/sobel_op.py scripts/BIG/binary_mask_negate.py util/model_saver.py scripts/ade20K/all_plus_one.py segmentation-refinement/test.py models/sync_batchnorm/replicate.py dataset/reseed.py train.py util/log_integrator.py segmentation-refinement/segmentation_refinement/download.py segmentation-refinement/segmentation_refinement/models/psp/pspnet.py segmentation-refinement/segmentation_refinement/main.py eval_post_ade.py util/image_saver.py models/sync_batchnorm/__init__.py segmentation-refinement/segmentation_refinement/eval_helper.py eval_memory_usage.py segmentation-refinement/segmentation_refinement/models/psp/extractors.py util/de_transform.py util/boundary_modification.py models/sync_batchnorm/unittest.py dataset/offline_dataset.py scripts/PASCAL_FINE/convert_psp_outputs.py scripts/download_training_dataset.py eval_post.py dataset/make_bb_trans.py util/compute_boundary_acc.py scripts/ade20K/ade_expand_inst.py util/metrics_compute.py dataset/online_dataset.py util/logger.py models/psp/pspnet.py scripts/BIG/convert_binary.py eval_helper.py util/hyper_para.py segmentation-refinement/segmentation_refinement/__init__.py util/util.py scripts/BIG/convert_refinenet_output.py eval.py util/file_buffer.py scripts/BIG/convert_deeplab_outputs.py models/psp/extractors.py scripts/PASCAL_FINE/convert_refinenet_output.py dataset/__init__.py Parser safe_forward process_high_res_im process_im_single_pass safe_forward get_iu color_map get_iu worker_init_fn scale_bb_by is_bb_overlap get_bb_position OfflineDataset OnlineTransformDataset reseed SplitTransformDataset SobelComputer SobelOperator 
ResNet resnet50 Bottleneck conv3x3 load_weights_sequential BasicBlock PSPModule PSPUpsample PSPNet _sum_ft SynchronizedBatchNorm2d _unsqueeze_ft _SynchronizedBatchNorm SynchronizedBatchNorm1d SynchronizedBatchNorm3d SyncMaster FutureResult SlavePipe execute_replication_callbacks CallbackContext DataParallelWithCallback patch_replication_callback TorchTestCase as_numpy get_disk_kernel download_file_from_google_drive save_response_content get_confirm_token resize_max_side safe_forward process_high_res_im process_im_single_pass Refiner ResNet conv3x3 resnet50 Bottleneck PSPModule PSPUpsample RefinementModule modify_boundary compute_boundary_acc_multi_class compute_boundary_acc get_disk_kernel perturb_seg compute_iou get_random_structure random_erode random_dilate FileBuffer HyperParameters tensor_to_numpy detach_to_cpu transpose_np get_image_array pool_images vis_prediction tensor_to_gray_im tensor_to_seg tensor_to_np_float tensor_to_im tensor_to_numpy detach_to_cpu fix_width_trunc BoardLogger Integrator compute_loss_and_metrics get_orig_iou_hook get_new_iou_hook get_iou_gain ModelSaver resize_max_side resize_min_side compute_tensor_iou compute_tensor_iu shape cuda model safe_forward zeros_like float min where shape interpolate resize_max_side empty_cache to max range resize_max_side shape safe_forward interpolate count_nonzero zeros bitget array range seed any int min max seed manual_seed items list OrderedDict shape load_state_dict cat load_url ResNet load_weights_sequential list hasattr __data_parallel_replicate__ modules enumerate len replicate data isinstance astype get get_confirm_token save_response_content Session items list startswith max zeros int sorted list normal drawContours zeros_like concatenate perturb_seg sort findContours copy sample RETR_LIST append CHAIN_APPROX_NONE range moments enumerate int uint8 get_disk_kernel astype shape sum range int uint8 get_disk_kernel zeros_like astype morphologyEx MORPH_GRADIENT shape unique sum range randint dilate 
randint get_random_structure randint erode get_random_structure int threshold print copy random_erode shape random_dilate randint range astype astype tensor_to_numpy detach_to_cpu transpose_np tensor_to_numpy detach_to_cpu transpose_np inv_seg_trans tensor_to_numpy detach_to_cpu transpose_np inv_im_trans get items list LINE_AA putText astype FONT_HERSHEY_SIMPLEX shape iter split zeros next enumerate values len detach_to_cpu clip replace inv_im_trans reshape min base_transform inv_seg_trans append range len l1_loss binary_cross_entropy_with_logits compute_tensor_iu float mse_loss range squeeze sum squeeze sum min | # CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement [Ho Kei Cheng*](https://hkchengrex.github.io/), Jihoon Chung*, Yu-Wing Tai, Chi-Keung Tang [[arXiv]](https://arxiv.org/abs/2005.02551) [[PDF]](https://arxiv.org/pdf/2005.02551) [[Supplementary Information (Comparisons with DenseCRF included!)]](https://openaccess.thecvf.com/content_CVPR_2020/supplemental/Cheng_CascadePSP_Toward_Class-Agnostic_CVPR_2020_supplemental.pdf) [[Supplementary image results]](http://hkchengad.student.ust.hk/CascadePSP/CascadePSP-supp-images.pdf)  ## Introduction CascadePSP is a deep learning model for high-resolution segmentation refinement. This repository contains our PyTorch implementation with both training and testing functionalities. We also provide the annotated UHD dataset **BIG** and the pretrained model. Here are some refinement results on high-resolution images. | 2,307 |
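The helper names in the dependency list above (`get_iu`, `compute_iou`, `compute_tensor_iou`) suggest the refined masks are scored by intersection-over-union. A minimal stdlib sketch of that metric on flat binary masks — illustrative only, not the repository's tensor implementation:

```python
def iou(pred, gt):
    """Intersection-over-union of two binary masks given as flat 0/1
    sequences; the standard overlap score used to judge how much a
    refined mask improves on the input segmentation."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union if union else 1.0  # two empty masks agree fully

print(iou([1, 1, 0, 0], [1, 0, 1, 0]))  # 1 pixel in common, 3 in the union
```

In practice the score is averaged over a dataset, and boundary-specific variants (cf. `compute_boundary_acc` above) weight pixels near mask edges.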
hlwang1124/NIM | ['anomaly detection', 'semantic segmentation'] | ['Applying Surface Normal Information in Drivable Area and Road Anomaly Detection for Ground Mobile Robots'] | demo.py CalibInfo normal_visualization NIM | # Normal Inference Module This is a PyTorch demo of the Normal Inference Module (NIM), presented in our IROS 2020 paper, [Applying Surface Normal Information in Drivable Area and Road Anomaly Detection for Ground Mobile Robots](https://arxiv.org/abs/2008.11383). Our NIM can be used effectively for estimating surface normal information from depth images. The code has been tested in Python 3.6 and PyTorch 1.7. <p align="center"> <img src="doc/NIM.png" width="100%"/> </p> We provide two examples in `examples`, where `rgb`, `depth_u16` and `calib` contain RGB images, depth images and calibration files, respectively. These examples belong to the [KITTI road dataset](http://www.cvlibs.net/datasets/kitti/eval_road.php). Run `demo.py`, and then the surface normal estimation will be saved in `examples/normal`. Please note that our NIM can run in two different ways. Set `sign_filter=True`, and then our NIM will additionally utilize a sign filter. If you use this code for your research, please cite our paper. ``` @inproceedings{wang2020applying, | 2,308 |
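A surface normal map can be recovered from the gradients of a depth image; the sketch below is a plain-Python simplification of that idea (orthographic assumption, no camera intrinsics — the actual NIM uses the calibration files from `calib` and its own inference module):

```python
import math

def normals_from_depth(depth):
    """Per-pixel surface normals from a depth map via central
    differences, clamped at the borders. Orthographic simplification:
    n = normalize((-dz/dx, -dz/dy, 1))."""
    h, w = len(depth), len(depth[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dzdx = (depth[y][min(x + 1, w - 1)] - depth[y][max(x - 1, 0)]) / 2.0
            dzdy = (depth[min(y + 1, h - 1)][x] - depth[max(y - 1, 0)][x]) / 2.0
            n = (-dzdx, -dzdy, 1.0)
            norm = math.sqrt(sum(c * c for c in n))
            out[y][x] = tuple(c / norm for c in n)
    return out

# On a flat depth plane every normal points straight at the camera.
flat_normals = normals_from_depth([[5.0] * 4 for _ in range(4)])
print(flat_normals[1][1][2])  # 1.0
```

Real pipelines additionally back-project pixels through the camera intrinsics before differencing, which is what the per-example calibration files are for.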
hmdolatabadi/AdvFlow | ['adversarial attack'] | ['Black-box Adversarial Example Generation with Normalizing Flows', 'AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows'] | train.py data.py classifier_loader.py attack_imagenet.py resnet.py opts.py config.py attack.py vgg.py wide_resnets.py imagenet.py attack_greedy.py model.py weights_init load_pretrained_cifar_resnet50 load_pretrained_ImageNet load_pretrained_wide_resnet add_noise ReshapeTransform CropTransform RandomHorizontalFlipTensor ImageNet32 UnlabelledImageFolder ImageNet64 ImageNet64Fast optim_step load init_model load_target_model random_orthog save weights_init parse PreActBlock ResNet ResNet18 Bottleneck ResNet34 ResNet101 test conv3x3 ResNet50 PreActBottleneck BasicBlock ResNet152 img_tile sample_outputs dummy_loss vgg19 VGG vgg16_bn _vgg vgg19_bn vgg11_bn vgg13 vgg11 make_layers vgg13_bn vgg16 conv_init conv3x3 wide_basic Wide_ResNet Wide_ResNet DataParallel load_state_dict apply DataParallel ResNet50 load_state_dict apply DataParallel inception_v3 normal_ __name__ fill_ uniform_ add_image_noise svd T randn requires_grad fill_ init_scale named_parameters cuda split load_state_dict step zero_grad parameters eval load_pretrained_resnet50 load_pretrained_inception cuda load_pretrained_wide_resnet filename batch_size add_argument lr ArgumentParser n_epochs parse_args load_file randn Variable ResNet18 print size net transpose sqrt floor iterator ceil zeros len Conv2d load_state_dict_from_url make_layers VGG load_state_dict constant xavier_uniform bias weight __name__ | # AdvFlow *Hadi M. Dolatabadi, Sarah Erfani, and Christopher Leckie 2020* [](https://arxiv.org/abs/2007.07435) [](https://opensource.org/licenses/MIT) This is the official implementation of NeurIPS 2020 paper [_AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows_](https://arxiv.org/abs/2007.07435). 
A small part of this work, the Greedy AdvFlow, has been published in [ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models](https://invertibleworkshop.github.io/accepted_papers/pdfs/36.pdf). A blog post explaining our approach can be found [here](https://hmdolatabadi.github.io/posts/2020/10/advflow/). <p align="center"> <img src="https://raw.githubusercontent.com/hmdolatabadi/hmdolatabadi.github.io/master/images/advflow/AdvFlow.gif" width="95%"> </p> ## Requirements | 2,309 |
hmishra2250/NTM-One-Shot-TF | ['one shot learning'] | ['One-shot Learning with Memory-Augmented Neural Networks'] | Omniglot.py MANN/Model.py MANN/Utils/tf_utils.py MANN/Utils/Metrics.py MANN/Utils/similarities.py TestUpd.py MANN/Utils/Images.py MANN/Utils/Generator.py Examples/Omniglot.py testing.py MANN/Test_Model.py MANN/Utils/init.py omniglot omniglot omniglot OmniglotGenerator time_offset_label get_shuffled_images load_transform shared_one_hot weight_and_bias_init shared_zeros shared_glorot_uniform accuracy_instance cosine_similarity shared_float32 update_tensor accuracy_instance memory_augmented_neural_network OmniglotGenerator argmax run softmax_cross_entropy_with_logits str merge_all placeholder reduce_sum append range one_hot nb_samples_per_class FileWriter eval InteractiveSession as_list time minimize print reshape graph AdamOptimizer reduce_mean add_summary global_variables_initializer scalar Variable concat_v2 assign update_tensor Print shuffle list zip minimum asarray imresize shift maximum rotate imread max sqrt sum prod isinstance zeros dtype constant scan reduce_mean cast transpose matmul as_list dtype print scan cast | # One Shot Learning using Memory-Augmented Neural Networks in Tensorflow. Update: added support for Tensorflow v1*. Tensorflow implementation of the paper *One-shot Learning with Memory-Augmented Neural Networks*. Current Progress of Implementation: - [x] Utility Functions: - [x] Image Handler - [x] Metrics (Accuracy) - [x] Similarities (Cosine Similarity) - [x] LSTM Controller and Memory Unit - [x] Batch Generators | 2,310 |
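The cosine similarity ticked off in the checklist above is what the memory-augmented network uses for content-based addressing (the repo keeps it in `MANN/Utils/similarities.py`). A stdlib-only sketch of the formula, independent of the TensorFlow implementation:

```python
import math

def cosine_similarity(key, memory_row, eps=1e-8):
    """Cosine similarity between a read key and one memory row, as used
    for content-based addressing in MANN/NTM-style models. eps guards
    against division by zero for all-zero vectors."""
    dot = sum(k * m for k, m in zip(key, memory_row))
    norm = math.sqrt(sum(k * k for k in key)) * math.sqrt(sum(m * m for m in memory_row))
    return dot / (norm + eps)

print(round(cosine_similarity([1.0, 0.0], [1.0, 0.0]), 6))  # 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 6))  # 0.0
```

In the full model this score is computed against every memory row and passed through a softmax to produce the read weights.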
hoangminhle/SIMILE | ['imitation learning'] | ['Smooth Imitation Learning for Online Sequence Prediction'] | camera_simile.py roll_and_smooth_test_trajectory interpolate_learned_policy form_state_vector_pos interpolate_and_smooth_learned_policy collect_learned_trajectory rollout_nofilter_test_trajectory form_augmented_context_train calculate_smooth_coeff calculate_residual form_auxiliary_position_vector residual_diff_smooth equivalent_position_coeff interpolate_test_policy position_smooth rollout_nofilter_learned_trajectory position_and_velocity_smooth joint_loss_calculation form_state_vector roll_and_smooth_learned_trajectory gather_rows_to_delete form_auxiliary_velocity_vector collect_test_trajectory_position_velocity residual_smooth interpolate_and_smooth_test_policy test_error_calculation collect_test_trajectory velocity_smooth collect_learned_trajectory_position_velocity mean sqrt shape zeros diff mean zeros diff arange inner copy shape append zeros range predict hstack inner shape zeros range predict len arange concatenate inner copy shape append zeros range predict diff arange concatenate inner copy shape append zeros range predict diff concatenate hstack inner shape zeros diff range len concatenate hstack inner len shape zeros range diff position_and_velocity_smooth velocity_smooth position_smooth Ridge coef_ insert fit delete copy roll append empty array range diff Ridge coef_ insert hstack fit delete copy roll append empty range diff Ridge coef_ delete copy roll append empty range fit Ridge coef_ delete copy roll append empty range fit Ridge coef_ insert fit delete copy roll append empty array range diff hstack copy roll vstack empty range hstack roll vstack empty range insert copy roll empty range diff append range append array range form_auxiliary_position_vector hstack form_auxiliary_velocity_vector form_state_vector_pos hstack copy roll vstack empty range copy roll dot shape zeros empty range arange inner shape append zeros range predict inner hstack 
shape append zeros range len arange inner copy shape append zeros range predict hstack inner copy shape append zeros range len arange inner copy shape append zeros range predict hstack inner copy shape append zeros range predict len | # SIMILE Code implementing the SIMILE algorithm from the ICML 2016 paper "Smooth Imitation Learning for Online Sequence Prediction". | 2,311
hobincar/RecNet | ['video captioning'] | ['Reconstruction Network for Video Captioning'] | models/temporal_attention.py configs/train_stage2.py configs/split.py configs/run.py train.py run.py splits/MSVD.py models/caption_generator.py splits/MSR-VTT.py utils.py models/global_reconstructor.py loader/data_loader.py losses.py models/local_reconstructor.py loader/MSRVTT.py configs/train_stage1.py loader/MSVD.py loader/transform.py models/decoder.py global_reconstruction_loss local_reconstruction_loss entropy_loss run log_train build_model log_val log_test main parse_args build_loaders get_lr get_predicted_captions evaluate score load_checkpoint idxs_to_sentence save_result cls_to_dict test get_groundtruth_captions save_checkpoint parse_batch dict_to_cls Struct train calc_scores LossChecker RunConfig MSRVTTSplitConfig MSVDSplitConfig FeatureConfig VocabConfig DecoderConfig TrainConfig MSRVTTLoaderConfig MSVDLoaderConfig FeatureConfig VocabConfig DecoderConfig LocalReconstructorConfig GlobalReconstructorConfig TrainConfig MSRVTTLoaderConfig MSVDLoaderConfig CustomVocab CustomDataset Corpus MSRVTTVocab MSRVTT MSRVTTDataset MSVD MSVDVocab MSVDDataset Truncate ToIndex TrimExceptAscii RemovePunctuation ToTensor Lowercase PadToLength ZeroPadIfLessThan RandomSample PadLast SplitWithWhiteSpace PadFirst UniformSample NLTKWordpunctTokenizer TrimIfLongerThan CaptionGenerator Decoder GlobalReconstructor LocalReconstructor TemporalAttention load_metadata save_video load_splits load_videos save_metadata split load_metadata save_video load_splits load_videos save_metadata split mean log_softmax sum softmax FloatTensor size expand mean type sum mse_loss mse_loss MSVD LocalReconstructor score n_vocabs min_count cuda n_words get_predicted_captions n_vocabs_untrimmed corpus save_result load_state_dict n_words_untrimmed MSRVTT GlobalReconstructor CaptionGenerator vocab format result_dpath get_groundtruth_captions dict_to_cls load join print Decoder max_caption_len add_argument 
ArgumentParser MSVD format print n_vocabs_untrimmed n_words_untrimmed MSRVTT n_vocabs min_count n_words format pretrained_reconstructor_fpath LocalReconstructor print Decoder load_state_dict max_caption_len pretrained_decoder_fpath GlobalReconstructor cuda CaptionGenerator tx_train_cross_entropy_loss tx_lr format tx_train_entropy_loss metrics print tx_train_loss tx_train_reconstruction_loss add_scalar tx_val_entropy_loss format metrics print tx_val_cross_entropy_loss tx_val_reconstruction_loss tx_val_loss add_scalar print format metrics add_scalar log_train reg_lambda rnn_teacher_forcing_ratio gradient_clip model_id ReduceLROnPlateau save_checkpoint log_dpath build_loaders Adam epochs recon_lambda parse_args get_lr range vocab SummaryWriter format build_model log_val test evaluate print load_checkpoint TrainConfig parameters log_test train step cuda cat model clip_grad_norm_ zero_grad set_description n_vocabs LossChecker view local_reconstruction_loss parse_batch update format nll_loss mean item backward global_reconstruction_loss tqdm parameters zeros step entropy_loss update entropy_loss view model global_reconstruction_loss nll_loss mean local_reconstruction_loss eval parse_batch item n_vocabs zeros enumerate LossChecker update eval build_onlyonce_iter describe idx2word transpose idxs_to_sentence parse_batch iter zip append calc_scores compute_score zip score get_groundtruth_captions get_predicted_captions param_groups append join item isclass getattr dir dir getattr Struct setattr load load_state_dict dirname save makedirs list keys dirname makedirs format format join models format rnn_num_layers embedding_size rnn_type rnn_attn_size strftime gmtime rnn_hidden_size max_caption_len size size type defaultdict File video_fpath format value print File close print format val_video_fpath val_metadata_fpath load_metadata train_metadata_fpath save_video load_splits load_videos train_video_fpath save_metadata test_metadata_fpath test_video_fpath caption_fpath 
reset_index read_csv to_csv | # RecNet This project tries to implement *RecNet* proposed in **[Reconstruction Network for Video Captioning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Reconstruction_Network_for_CVPR_2018_paper.pdf) [1], *CVPR 2018***. # Environment * Ubuntu 16.04 * CUDA 9.0 * cuDNN 7.3.1 * NVIDIA GeForce GTX Titan Xp 12GB # Requirements * Java 8 * Python 2.7.12 | 2,312
hobincar/SGN | ['video captioning'] | ['Semantic Grouping Network for Video Captioning'] | models/transformer/Models.py models/transformer/SubLayers.py models/visual_encoder.py train.py splits/MSVD.py splits/MSR-VTT.py models/attention.py evaluate.py utils.py models/transformer/Modules.py loader/data_loader.py models/semantic_grouping_network.py extract_negative_videos.py models/transformer/Constants.py loader/MSRVTT.py loader/transform.py loader/MSVD.py config.py models/decoder.py models/transformer/Layers.py Config VocabConfig VisualEncoderConfig DecoderConfig PhraseEncoderConfig MSRVTTLoaderConfig MSVDSplitConfig MSRVTTSplitConfig MSVDLoaderConfig parse_args run main extract_negative_samples load_MSRVTT_captions load_MSVD_captions log_val main log_train get_teacher_forcing_ratio get_lr evaluate build_model score count_parameters build_YOLO_iter idxs_to_sentence set_random_seed save_checkpoint parse_batch train calc_scores build_loaders LossChecker CustomVocab CustomDataset Corpus MSRVTTVocab MSRVTT MSRVTTDataset MSVD MSVDVocab MSVDDataset Truncate ToIndex TrimExceptAscii RemovePunctuation ToTensor Lowercase PadToLength ZeroPadIfLessThan RandomSample PadLast SplitWithWhiteSpace PadFirst UniformSample NLTKWordpunctTokenizer TrimIfLongerThan SemanticAttention SemanticAlignment SemanticGroupingNetwork VisualEncoder DecoderLayer EncoderLayer get_attn_key_pad_mask get_subsequent_mask get_sinusoid_encoding_table Encoder get_non_pad_mask ScaledDotProductAttention MultiHeadAttention PositionwiseFeedForward load_metadata save_video load_splits load_videos save_metadata split load_metadata save_video load_splits load_videos save_metadata split format format join format add_argument ArgumentParser load build_model score print eval load_state_dict cuda build_loaders defaultdict Compose zip append read_csv values items list defaultdict Compose append values join list defaultdict format items print words Counter set tqdm append float sum keys values len 
extract_negative_samples format load_MSRVTT_captions load_MSVD_captions tx_lr format tx_teacher_forcing_ratio metrics print tx_train_contrastive_attention_loss tx_train_loss tx_train_cross_entropy_loss add_scalar format metrics print tx_val_cross_entropy_loss tx_val_contrastive_attention_loss tx_val_loss add_scalar float log_train count_parameters min_teacher_forcing_ratio gradient_clip score set_random_seed save_checkpoint CA_lambda get_teacher_forcing_ratio log_dpath cuda build_loaders seed step get_lr range SummaryWriter build_model log_val Adamax CosineAnnealingLR metrics evaluate print parameters train epochs max_teacher_forcing_ratio add_scalar MSVD format print n_vocabs_untrimmed n_words_untrimmed MSRVTT n_vocabs min_count n_words SGN Decoder VisualEncoder PhraseEncoder max_caption_len PS_threshold cuda model clip_grad_norm_ zero_grad set_description n_vocabs LossChecker view FloatTensor parse_batch binary_cross_entropy_with_logits update format nll_loss mean item backward tqdm parameters step update view model FloatTensor nll_loss mean eval parse_batch binary_cross_entropy_with_logits item n_vocabs LossChecker items list defaultdict stack parse_batch iter append keys values enumerate build_refs describe build_YOLO_iter tqdm eval zip calc_scores compute_score zip param_groups append join item state_dict dirname save makedirs seed str manual_seed_all manual_seed array cos sin size eq PAD expand triu size ones expand defaultdict File video_fpath format value print File close print format val_video_fpath val_metadata_fpath load_metadata train_metadata_fpath save_video load_splits load_videos train_video_fpath save_metadata test_metadata_fpath test_video_fpath caption_fpath reset_index read_csv to_csv | # Semantic Grouping Network for Video Captioning Hobin Ryu, Sunghun Kang, Haeyong Kang, and Chang D. Yoo. AAAI 2021. 
[[arxiv]](https://arxiv.org/abs/2102.00831) # Environment * Ubuntu 16.04 * CUDA 9.2 * cuDNN 7.4.2 * Java 8 * Python 2.7.12 * PyTorch 1.1.0 | 2,313 |
hollance/BlazeFace-PyTorch | ['face detection'] | ['BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs'] | blazeface.py BlazeFace intersect BlazeBlock jaccard FinalBlazeBlock overlap_similarity clamp size min expand max intersect expand_as | hollance/BlazeFace-PyTorch | 2,314 |
hologerry/Attr2Font | ['style transfer'] | ['Attribute2Font: Creating Fonts You Want From Attributes'] | dataloader.py options.py main.py vgg_cx.py model.py ImageAttr rreplace get_loader test interp main train test_one_epoch get_model_parameters CXLoss StyleEncoder ResidualBlock AttributeAwareUp Down SelfAttention tile_like CALayer2d AttrClassifier DiscriminatorWithClassifier GeneratorStyle get_parser conv2d conv VGG19_CX rsplit ImageAttr Compose ToTensor Resize DataLoader Normalize append generator vgg19 criterion_ce batch_size criterion_pixel zero_grad DataParallel unsqueeze lambda_char save save_image attribute_embed open init_epoch dataset_name load_model multi_gpu view Embedding step Adam load_state_dict attr_unsuper_tolearn discriminator experiment_name to range cat detach state_dict attr_embed size lambda_GAN eval timedelta lambda_l1 n_epochs flush enumerate load join time backward print write criterion_GAN DiscriminatorWithClassifier parameters sigmoid repeat data_root criterion_cx GeneratorStyle get_loader attr_channel dis_pred unsuper_num len print join load load_state_dict DataParallel open dataset_name multi_gpu check_freq Embedding test_epoch experiment_name to range attr_embed n_epochs join data_root GeneratorStyle get_loader attr_channel test_one_epoch unsuper_num load join dataset_name multi_gpu attr_embed print Embedding DataParallel data_root load_state_dict GeneratorStyle get_loader attr_channel experiment_name to unsuper_num join experiment_name print test interp parse_args train get_parser makedirs size repeat view size list parameters add_argument ArgumentParser | # Attr2Font ## Introduction This is the official PyTorch implementation of the **Attribute2Font: Creating Fonts You Want From Attributes**.  
Paper: [arXiv](https://arxiv.org/abs/2005.07865) | [Research Gate](https://www.researchgate.net/publication/341423467_Attribute2Font_Creating_Fonts_You_Want_From_Attributes/comments) Supplementary Material: [link](paper/Siggraph2020_Attr2Font_Supplemental_Material.pdf) Video: [link](img/att2font_demo.mov) Code: [GitHub](https://github.com/hologerry/Attr2Font) ## Abstract Font design is now still considered as an exclusive privilege of professional designers, whose creativity is not possessed by existing software systems. Nevertheless, we also notice that most commercial font products are in fact manually designed by following specific requirements on some attributes of glyphs, such as italic, serif, cursive, width, angularity, etc. Inspired by this fact, we propose a novel model, Attribute2Font, to automatically create fonts by synthesizing visually pleasing glyph images according to user-specified attributes and their corresponding values. To the best of our knowledge, our model is the first one in the literature which is capable of generating glyph images in new font styles, instead of retrieving existing fonts, according to given values of specified font attributes. Specifically, Attribute2Font is trained to perform font style transfer between any two fonts conditioned on their attribute values. After training, our model can generate glyph images in accordance with an arbitrary set of font attribute values. Furthermore, a novel unit named Attribute Attention Module is designed to make those generated glyph images better embody the prominent font attributes. Considering that the annotations of font attribute values are extremely expensive to obtain, a semi-supervised learning scheme is also introduced to exploit a large number of unlabeled fonts. Experimental results demonstrate that our model achieves impressive performance on many tasks, such as creating glyph images in new font styles, editing existing fonts, interpolation among different fonts, etc. | 2,315 |
homangab/gradcem | ['model based reinforcement learning'] | ['Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization'] | mpc/test_energy.py mpc/svgd.py mpc/grad.py experiments/D_comp/plot.py mpc/gradcem.py mpc/cem.py experiments/discont_comp/main.py setup.py experiments/discont_comp/plot.py main.py experiments/discont_comp/plot_num_obs.py experiments/D_comp/main.py run_mult run run_mult run CEM GradPlan GradCEMPlan SVGDPlan rbf_kernel squared_dist npy get_test_energy BatchRepulseCircle NavigateGTEnv get_test_energy2d_env RepulseCircle FuncMinGTEnv test_energy2d set_env reset_state forward rollout list std tqdm mean append array range run t view size exp squared_dist detach tensor | Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization ====== Code accompanying the paper Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization with authors [Homanga Bharadhwaj](https://homangab.github.io), Kevin Xie, and [Florian Shkurti](http://www.cs.toronto.edu/~florian/) (First two authors contributed equally). To be presented in [L4DC 2020](https://sites.google.com/berkeley.edu/l4dc/accepted-papers).  Requirements ------------ - Python 3 - [DeepMind Control Suite](https://github.com/deepmind/dm_control) - [Gym](https://gym.openai.com/) - [OpenCV Python](https://pypi.python.org/pypi/opencv-python) | 2,316 |
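The planner classes this record lists (CEM, GradPlan, GradCEMPlan) implement the paper's optimizers. As a rough illustration of the cross-entropy method itself, here is a minimal numpy sketch; it is not this repository's API, and the function and parameter names are made up for the example:

```python
import numpy as np

def cem_minimize(cost_fn, dim, iters=20, pop=64, elite_frac=0.125, seed=0):
    """Cross-entropy method: sample a population from a Gaussian,
    keep the lowest-cost elites, and refit the Gaussian to them."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((pop, dim))
        costs = np.array([cost_fn(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elite]]
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + 1e-6  # tiny floor so sampling never collapses
    return mean

# Toy problem: quadratic bowl with its minimum at (0.5, -0.3)
target = np.array([0.5, -0.3])
solution = cem_minimize(lambda x: float(np.sum((x - target) ** 2)), dim=2)
```

Per the paper title, the gradient-based variants additionally refine the sampled solutions with gradient steps through a differentiable cost, which plain CEM does not require.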
honeygupta/UW-Net | ['depth estimation'] | ['Unsupervised Single Image Underwater Depth Estimation'] | losses.py uwnet_datasets.py test.py utils.py layers.py create_uwnet_dataset.py main.py data_loader.py model.py evaluate_metrics.py create_list create_dataset _load_samples pear_coeff sq_sinv calculate_metrics lrelu general_deconv2d instance_norm general_conv2d lsgan_loss_discriminator tf_ms_ssim tf_ssim _tf_fspecial_gauss cycle_consistency_loss lsgan_loss_generator random_crop Data load_data main UWNet TransitionDown preact_conv discriminator_A discriminator_B DenseBlock get_outputs TransitionUp build_resnet_block denseNet main Data UWNet load_data resBlock upsample _phase_shift log10 PS append sort join listdir append shuffle create_list range read from_tensor_slices TextLineReader string_input_producer make_initializable_iterator decode_csv map get_next read_file decode_png decode_jpeg mean log pearsonr ravel join uint8 print sort astype mean append imread listdir array range len exp expand_dims constant conv2d reduce_mean _tf_fspecial_gauss constant squeeze tf_ssim reduce_prod reduce_mean stack append avg_pool range split append reader open randint min str UWNet train test bool makedirs conv2d relu conv2d relu conv2d range PS as_list reshape transpose concat split _phase_shift concat split constant log | ## Unsupervised Single Image Underwater Depth Estimation (UW-Net) [[Project]](http://www.ee.iitm.ac.in/comp_photolab/project-underwater.html) [[Paper]](https://arxiv.org/pdf/1905.10595.pdf) This repository contains the tensorflow implementation for UW-Net and includes the scripts to train and test the network. The code is written and maintained by [Honey Gupta](https://github.com/honeygupta). ## Getting Started ### Package requirements * The following packages are required to run the codes. Note: the version of the packages were the ones we used and are suggestive, not mandatory. * python = 3.6 * click = 7.0 * tensorflow-gpu = 1.11 | 2,317 |
hongchenphd/MHSA-Net | ['person re identification'] | ['MHSA-Net: Multi-Head Self-Attention Network for Occluded Person Re-Identification'] | config.py main_reid.py DefaultConfig train test IHTL init_dataset ImageData IDE SGD query DataLoader ResNetBuilder save_checkpoint Logger _state_dict ResNetEvaluator dataset save_dir cuda LiftedStructureLoss seed get_optim_policy Margin max_epoch Resnet Adam MHSA FDRT load_state_dict TrainTransform sum range state_dict manual_seed_all SummaryWriter format test start_epoch CrossEntropyLabelSmooth manual_seed is_available ACM join gallery evaluate print datatype cls_tripletTrainer best_rank _parse TestTransform margin num_train_pids pretrained_model TripletLoss adjust_lr format print eval dataset len | # MHSA-Net The trained model in the Occluded-DukeMTMC https://pan.baidu.com/s/1HxLg0EvcBpAfSN_eW3oK7Q passwd: wip9 The trained model in the P-DukeMTMC-reID https://pan.baidu.com/s/11a_BQOEeDVGMSDCCu8iCOA passwd:844u The trained model in the CUHK03-NP/Labeled https://pan.baidu.com/s/1yudTzvDrsJQGl-mo3KvEEQ passwd:c635 The trained model in the CUHK03-NP/Detected https://pan.baidu.com/s/1Wfu7YC5SrFVP5MjM0Yx3Jw passwd: mrto | 2,318 |
hongwang600/DocRed | ['relation extraction'] | ['Fine-tune Bert for DocRED with Two-step Process'] | train.py test_sp.py models/bert.py evaluation.py config/Config.py train_sp.py test.py gen_data.py models/__init__.py models/LSTM_SP.py config/EviConfig.py models/attention.py models/CNN3.py models/LSTM.py config/__init__.py models/BiLSTM.py models/ContextAware.py gen_train_facts sents_2_idx init Accuracy Config Accuracy EviConfig PositionwiseFeedForward clones LayerNorm SublayerConnection Encoder SimpleEncoder MultiHeadedAttention PositionalEncoding attention EncoderLayer Bert LockedDropout BiLSTM EncoderRNN BiAttention EncoderLSTM CNN3 BiAttention SelfAttention EncoderRNN LockedDropout EncoderLSTM ContextAware LockedDropout EncoderRNN BiAttention EncoderLSTM LSTM LockedDropout LSTM_SP EncoderRNN BiAttention EncoderLSTM load join dump list replace tuple set add exists open list len append range enumerate sents_2_idx save max subword_tokenize_to_ids open list add append range get dump set lower enumerate load join int print zeros len dropout size transpose matmul sqrt masked_fill softmax | ## Code cloned from https://github.com/thunlp/DocRED/tree/master/code # Baseline code ## Requirements and Installation python3 pytorch>=1.0 ``` pip3 install -r requirements.txt ``` ## preprocessing data Download metadata from [TsinghuaCloud](https://cloud.tsinghua.edu.cn/d/99e1c0805eb64736af95/) or [GoogleDrive](https://drive.google.com/drive/folders/1Ri3LIILKKBi3aBJjUVCOBpGX5PpONHRK) for baseline method and put them into prepro_data folder. | 2,319 |
hongweilibran/wmh_ibbmTum | ['data augmentation'] | ['Fully Convolutional Network Ensembles for White Matter Hyperintensities Segmentation in MR Images'] | evaluation.py train_leave_one_out.py submission_sysu_.py test_leave_one_out.py getDSC getAVD getLesionDetection do getImages getResultFilename getHausdorff Utrecht_preprocessing conv_bn_relu GE3T_preprocessing dice_coef_loss get_unet dice_coef_for_training main Utrecht_postprocessing GE3T_postprocessing test_leave_one_out get_crop_shape augmentation train_leave_one_out conv_bn_relu dice_coef_loss get_unet dice_coef_for_training main get_crop_shape | # Instructions for running the winning method in the MICCAI 2017 WMH segmentation challenge Thanks to Weiqing, the python3 version is here: https://github.com/FourierX9/wmh_ibbmTum.
### Testing your cases An easy-to-use demo can be downloaded here: https://drive.google.com/file/d/1tjk8CXjGYeddbaPCc1P5r-_ACUFcMut4/view?usp=sharing . It supports both single-modality (FLAIR) and two-modality (FLAIR and T1) input. Detailed instructions are in the **ReadMe** inside; please have a look at it. Simply run: ``` python test_your_data.py ``` The dockerfile submitted to the WMH challenge is also available.
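The evaluation helpers this record lists (getDSC, getHausdorff, getAVD) score the predicted lesion masks; the Dice similarity coefficient behind getDSC can be sketched in a few lines of numpy. This is a generic illustration of the metric only, not the repository's image-file-based evaluation script:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient of two binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# |A & B| = 2, |A| = 3, |B| = 3, so Dice = 4/6
score = dice_coefficient(a, b)
```

Dice is 1.0 for identical masks and 0.0 for disjoint ones, which is why it is the headline segmentation metric in the WMH challenge.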
hordiales/transferencia-estilo-sms | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | sms.py stft.py visualizar.py extra/extractTempo.py build_sms_conv_matrix inv_build_sms_from_matrix SMS sms_synth_to_file sms_analysis_from_file logmag_stft_from_file reconstruccion_fase_griffin_lim melspectrogram_from_file comparar_3_espectrogramas espectrograma comparar_espectrogramas_grande plot_time_signal_from_array comparar_2_espectrogramas espectrograma_grande process_file blackman load N_FFT minSineDur freqDevOffset freqDevSlope w t H sineModelAnal write_wav asarray H sineModelSynth Ns zeros range zeros range load stft print log1p abs load stft print log1p melspectrogram abs exp random_sample zeros_like angle stft min pi shape range istft show subplot imshow title figure show subplot imshow title figure show subplot imshow title figure load stft amplitude_to_db title figure specshow abs load subplot show stft amplitude_to_db title figure specshow abs waveplot title figure timelength print Duration DCRemoval RhythmExtractor2013 offset_filter | # Style transfer between audio files This work aims to apply to audio files the neural-network processing techniques developed for style transfer on images, in particular recently published ones that use one or more convolutional neural network (CNN) layers in their architecture. To do so, representations of the audible signal are built as matrices whose structure is similar to the ones normally used to process images. Several approaches to the problem are evaluated using analysis/synthesis techniques such as the short-time Fourier transform (STFT) and the decomposition of the input signal into sinusoids plus a residual, derived from Spectral Modelling Synthesis (SMS), historically used on voice signals.
Although the definition of style can be subjective, several approaches to defining and recognizing it are tried out. To this end, different Python programs are developed and implemented with the TensorFlow framework, which is designed for building and training neural networks. The result is a different approach to applying digital effects to audio signals. See the online demo: [Demo: Tangos 'El Choclo' and 'Adios Nonino'](https://nbviewer.jupyter.org/github/hordiales/transferencia-estilo-sms/blob/master/Demo/Demo.ipynb). # CNN architecture  # Dependencies See [INSTALL.md](INSTALL.md) to set up the dependencies. # Usage $ git clone https://github.com/hordiales/transferencia-estilo-sms $ cd transferencia-estilo-sms
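The image-like, log-magnitude STFT matrix described above (built in this repo by helpers such as logmag_stft_from_file, which per the dependency list combine stft, abs and log1p) can be sketched with numpy alone. This is an illustration with made-up parameter values, not the repository's librosa-based implementation:

```python
import numpy as np

def logmag_stft(signal, n_fft=512, hop=128):
    """Slice the signal into Hann-windowed frames and stack log-magnitude
    spectra into a 2-D (frequency x time) matrix, i.e. an image-like array
    that a CNN for style transfer can consume."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    magnitude = np.abs(np.fft.rfft(frames, axis=1))  # one spectrum per frame
    return np.log1p(magnitude).T                     # (n_fft//2 + 1, n_frames)

# A 440 Hz sine sampled at 8 kHz shows up as a bright horizontal line
# near frequency bin 440/8000*512 = 28
t = np.arange(4096) / 8000.0
S = logmag_stft(np.sin(2 * np.pi * 440.0 * t))
```

To get audio back from a modified magnitude matrix, a phase-reconstruction step such as Griffin-Lim (the repo's reconstruccion_fase_griffin_lim) is needed, since the magnitude alone discards phase.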
horizon-research/Efficient-Deep-Learning-for-Point-Clouds | ['autonomous driving'] | ['Mesorasi: Architecture Support for Point Cloud Analytics via Delayed-Aggregation'] | Networks/frustum-pointnets/models_limited/tf_ops/grouping/tf_grouping.py Networks/pointnet2/utils/show3d_balls.py Networks/DensePoint/utils-baseline/linalg_utils.py Networks/frustum-pointnets/train/test.py Networks/ldgcnn/train.py Networks/pointnet2/utils/pc_util.py Networks/DensePoint/utils-baseline/pointnet2_modules.py Networks/pointnet2/utils-baseline/show3d_balls.py Networks/DensePoint/utils/pointnet2_utils.py Networks/frustum-pointnets/models_baseline/frustum_pointnets_v2.py Networks/pointnet2/scannet/preprocessing/scannet_util.py Networks/dgcnn/models/transform_nets.py Networks/pointnet2/utils/pointnet_util.py Networks/pointnet2/models-baseline/pointnet2_cls_ssg.py Networks/frustum-pointnets/train/provider_limited.py Networks/pointnet2/utils/provider.py Networks/pointnet2/modelnet_dataset.py Networks/frustum-pointnets/train/log_v2_limited/train.py Networks/pointnet2/part_seg/part_dataset_all_normal.py Networks/dgcnn/utils/pc_util.py Networks/frustum-pointnets/models/tf_ops/sampling/tf_sampling.py Networks/frustum-pointnets/train/train.py Networks/frustum-pointnets/models/tf_ops/grouping/tf_grouping.py Networks/dgcnn/train.py Networks/DensePoint/train.py Networks/dgcnn/evaluate.py Networks/DensePoint/utils-baseline/build_ffi.py Networks/dgcnn/utils/plyfile.py Networks/frustum-pointnets/models_baseline/tf_ops/3d_interpolation/visu_interpolation.py Networks/pointnet2/utils-baseline/provider.py Networks/pointnet2/evaluate.py Networks/dgcnn/part_seg/part_seg_model.py Networks/dgcnn/utils-baseline/pc_util.py launcher.py Networks/pointnet2/scannet/preprocessing/demo.py Networks/dgcnn/utils-baseline/plyfile.py Networks/frustum-pointnets/models_limited/tf_ops/3d_interpolation/tf_interpolate.py Networks/frustum-pointnets/models_limited/tf_ops/grouping/test_knn.py 
Networks/pointnet2/models-limited/pointnet2_cls_ssg.py Networks/dgcnn/part_seg/part_seg_model_baseline.py Networks/frustum-pointnets/kitti/prepare_data.py Networks/pointnet2/scannet/scene_util.py Networks/DensePoint/utils-baseline/pointnet2_utils.py Networks/DensePoint/data/__init__.py Networks/dgcnn/models-baseline/transform_nets.py Networks/dgcnn/utils/data_prep_util.py Networks/frustum-pointnets/train/log/train.py Networks/DensePoint/utils-baseline/pytorch_utils/pytorch_utils.py Networks/ldgcnn/utils/pointfly.py Networks/frustum-pointnets/models/tf_ops/grouping/tf_grouping_op_test.py Networks/pointnet2/download.py Networks/ldgcnn/models/ldgcnn.py Networks/dgcnn/part_seg/evaluate.py Networks/dgcnn/part_seg/train-baseline.py Networks/ldgcnn/log_new/train.py Networks/pointnet2/tf_ops/grouping/tf_grouping.py Networks/pointnet2/models-baseline/pointnet2_part_seg.py PowerMeasurement/power_measurement.py Networks/frustum-pointnets/train/parser.py Networks/dgcnn/train-baseline.py Networks/frustum-pointnets/models/tf_ops/grouping/test_knn.py Networks/DensePoint/utils/_ext/pointnet2/__init__.py Networks/pointnet2/models/pointnet2_part_seg.py Networks/ldgcnn/VisionProcess/FileIO.py Networks/DensePoint/evaluate-baseline.py Networks/DensePoint/utils-baseline/_ext/pointnet2/__init__.py Networks/DensePoint/compile.py Networks/frustum-pointnets/models_baseline/tf_ops/3d_interpolation/tf_interpolate_op_test.py Networks/pointnet2/part_seg/evaluate.py Networks/frustum-pointnets/models_limited/frustum_pointnets_v2.py Networks/frustum-pointnets/models/tf_ops/3d_interpolation/tf_interpolate.py Networks/dgcnn/part_seg/transform_nets.py Networks/frustum-pointnets/models_limited/tf_ops/3d_interpolation/tf_interpolate_op_test.py Networks/dgcnn/models/dgcnn.py Networks/pointnet2/part_seg/evaluate-baseline.py Networks/dgcnn/part_seg/train.py Networks/frustum-pointnets/models_limited/tf_ops/3d_interpolation/visu_interpolation.py 
Networks/frustum-pointnets/models_baseline/tf_ops/grouping/tf_grouping.py Networks/frustum-pointnets/train/log_v2_baseline/train.py Networks/pointnet2/scannet/scannet_dataset.py Networks/frustum-pointnets/models_baseline/tf_ops/sampling/tf_sampling.py Networks/pointnet2/utils-baseline/tf_util_limited.py Networks/pointnet2/scannet/train.py Networks/DensePoint/data/ModelNet40Loader.py Networks/pointnet2/evaluate-limited.py Networks/frustum-pointnets/models_limited/model_util.py Networks/DensePoint/utils/pytorch_utils/pytorch_utils.py Networks/dgcnn/models-baseline/dgcnn.py Networks/DensePoint/train-baseline.py Networks/dgcnn/part_seg/transform_nets_baseline.py Networks/frustum-pointnets/models/tf_ops/3d_interpolation/tf_interpolate_op_test.py Networks/pointnet2/tf_ops/3d_interpolation/visu_interpolation.py Networks/dgcnn/part_seg/evaluate-baseline.py Networks/pointnet2/part_seg/part_dataset.py Networks/DensePoint/utils-baseline/pytorch_utils/__init__.py Networks/frustum-pointnets/train/test_runtime.py Networks/frustum-pointnets/kitti/kitti_util.py Networks/DensePoint/utils/pytorch_utils/__init__.py Networks/frustum-pointnets/train/train_util.py Networks/pointnet2/modelnet_h5_dataset.py Networks/pointnet2/part_seg/train_one_hot.py Networks/pointnet2/tf_ops/sampling/tf_sampling.py Networks/frustum-pointnets/models_baseline/model_util.py Networks/frustum-pointnets/models_baseline/pointnet_util.py Networks/frustum-pointnets/train/provider.py Networks/frustum-pointnets/models_baseline/tf_util.py Networks/frustum-pointnets/train/log_v1/train.py Networks/DensePoint/utils/pointnet2_modules.py Networks/ldgcnn/provider.py Networks/pointnet2/utils-baseline/tf_util.py Networks/dgcnn/utils/tf_util.py Networks/frustum-pointnets/train/log_v2_baseline/frustum_pointnets_v2.py Networks/pointnet2/tf_ops/grouping/tf_grouping_op_test.py Networks/frustum-pointnets/models_limited/pointnet_util.py Networks/pointnet2/part_seg/train-baseline.py 
Networks/pointnet2/utils-baseline/pointnet_util.py Networks/DensePoint/models-baseline/__init__.py Networks/dgcnn/utils/eulerangles.py Networks/pointnet2/tf_ops/3d_interpolation/tf_interpolate_op_test.py Networks/frustum-pointnets/kitti/kitti_object.py Networks/ldgcnn/utils/data_prep_util.py Networks/frustum-pointnets/models_limited/frustum_pointnets_v1.py Networks/pointnet2/utils/tf_util.py Networks/frustum-pointnets/models/model_util.py Networks/pointnet2/tf_ops/3d_interpolation/tf_interpolate.py Networks/ldgcnn/evaluate.py Networks/dgcnn/utils-baseline/eulerangles.py Networks/DensePoint/utils/build_ffi.py Networks/frustum-pointnets/models_baseline/frustum_pointnets_v1.py Networks/pointnet2/scannet/pc_util.py Networks/pointnet2/scannet/preprocessing/fetch_label_names.py Networks/pointnet2/compile.py Networks/DensePoint/models/__init__.py Networks/ldgcnn/download.py Networks/pointnet2/models/pointnet2_cls_ssg.py Networks/DensePoint/evaluate.py Networks/ldgcnn/log_new/ldgcnn.py Networks/pointnet2/part_seg/train.py Networks/ldgcnn/VisionProcess/PlotClass.py Networks/frustum-pointnets/train/box_util.py Networks/frustum-pointnets/models/pointnet_util.py Networks/frustum-pointnets/models_limited/tf_util.py Networks/frustum-pointnets/models/tf_util.py Networks/pointnet2/part_seg/train-limited.py Networks/dgcnn/evaluate-baseline.py Networks/dgcnn/provider.py Networks/frustum-pointnets/models_baseline/tf_ops/grouping/tf_grouping_op_test.py Networks/pointnet2/train-limited.py Networks/frustum-pointnets/train/log_v2/frustum_pointnets_v2.py Networks/dgcnn/utils-baseline/tf_util.py Networks/pointnet2/train-baseline.py Networks/pointnet2/scannet/preprocessing/collect_scannet_scenes.py Networks/frustum-pointnets/train/log_v2_limited/frustum_pointnets_v2.py Networks/ldgcnn/models/ldgcnn_classifier.py Networks/pointnet2/evaluate-baseline.py Networks/frustum-pointnets/mayavi/viz_util.py Networks/pointnet2/utils-baseline/pc_util.py Networks/ldgcnn/tsne_visualization.py 
Networks/frustum-pointnets/models/tf_ops/3d_interpolation/visu_interpolation.py Networks/pointnet2/models-limited/pointnet2_part_seg.py Networks/ldgcnn/utils/eulerangles.py Networks/pointnet2/part_seg/download.py Networks/DensePoint/download.py Networks/DensePoint/models/densepoint_cls_L6_k24_g2.py Networks/dgcnn/part_seg/download.py Networks/frustum-pointnets/train/log_v2/train.py Networks/frustum-pointnets/compile.py Networks/DensePoint/utils/linalg_utils.py Networks/frustum-pointnets/models_baseline/tf_ops/grouping/test_knn.py Networks/dgcnn/utils-baseline/data_prep_util.py Networks/frustum-pointnets/models/frustum_pointnets_v2.py Networks/ldgcnn/utils/tf_util.py Networks/frustum-pointnets/download.py Networks/DensePoint/data/data_utils.py Networks/pointnet2/train.py Networks/frustum-pointnets/train/log/frustum_pointnets_v1.py Networks/pointnet2/part_seg/evaluate-limited.py Networks/ldgcnn/models/ldgcnn_baseline.py Networks/frustum-pointnets/models_limited/tf_ops/grouping/tf_grouping_op_test.py Networks/frustum-pointnets/train/log_v1/frustum_pointnets_v1.py Networks/frustum-pointnets/train/provider_baseline.py Networks/frustum-pointnets/models_baseline/tf_ops/3d_interpolation/tf_interpolate.py Networks/ldgcnn/utils/plyfile.py Networks/dgcnn/download.py Networks/frustum-pointnets/models_limited/tf_ops/sampling/tf_sampling.py Networks/ldgcnn/log_new/ldgcnn_classifier.py Networks/frustum-pointnets/mayavi/test_drawline.py Networks/DensePoint/models-baseline/densepoint_cls_L6_k24_g2.py Networks/pointnet2/utils-baseline/pointnet_util_limited.py main main main train validate main train validate PointcloudRotatebyAngle PointcloudToTensor PointcloudRandomInputDropout PointcloudScaleAndTranslate PointcloudTranslate PointcloudJitter PointcloudScale angle_axis ModelNet40Cls _load_data_file _get_data_files DensePoint DensePoint parse_args clean build pdist2 pdist2_slow PointnetFPModule PointnetSAModuleMSG_new _PointnetSAModuleBase PointnetSAModule _PointnetSAModuleBase_new 
PointnetSAModuleMSG QueryAndGroup_new GroupAll FurthestPointSampling ThreeInterpolate GroupingOperation BallQuery QueryAndGroup ThreeNN GatherOperation _ConvBase variable_size_collate BNMomentumScheduler GloAvgConv EnhancedPointConv save_checkpoint TrainValSplitter group_model_params _FeatureDropoutNoScaling BatchNorm3d Conv3d _BNBase PointConv CrossValSplitter BatchNorm1d FC checkpoint_state Conv1d set_bn_momentum_default SharedMLP load_checkpoint _DropoutNoScaling Conv2d BatchNorm2d _import_symbols parse_args clean build pdist2 pdist2_slow PointnetFPModule _PointnetSAModuleBase PointnetSAModule PointnetSAModuleMSG GroupAll FurthestPointSampling ThreeInterpolate GroupingOperation BallQuery QueryAndGroup ThreeNN GatherOperation _ConvBase variable_size_collate BNMomentumScheduler GloAvgConv EnhancedPointConv save_checkpoint TrainValSplitter group_model_params _FeatureDropoutNoScaling BatchNorm3d Conv3d _BNBase PointConv CrossValSplitter BatchNorm1d FC checkpoint_state Conv1d set_bn_momentum_default SharedMLP load_checkpoint _DropoutNoScaling Conv2d BatchNorm2d _import_symbols eval_one_epoch log_string evaluate eval_one_epoch log_string evaluate rotate_point_cloud load_h5_data_label_seg loadDataFile getDataFiles load_h5 rotate_point_cloud_by_angle shuffle_data jitter_point_cloud getDataFilesShapeNet rotate_perturbation_point_cloud shift_point_cloud random_scale_point_cloud get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay print get_model get_loss placeholder_inputs print input_transform_net get_model get_loss placeholder_inputs input_transform_net pc_normalize convert_label_to_one_hot printout load_pts_seg_files output_color_point_cloud_red_blue placeholder_inputs pc_augment_to_point_num predict output_color_point_cloud pc_normalize convert_label_to_one_hot printout load_pts_seg_files output_color_point_cloud_red_blue placeholder_inputs pc_augment_to_point_num 
predict output_color_point_cloud print get_model get_loss get_model get_loss average_gradients train convert_label_to_one_hot printout average_gradients train convert_label_to_one_hot printout print input_transform_net input_transform_net load_ply_normal pad_arr_rows batch_mkdir save_h5_data_label_normal load_h5_data_label_normal load_h5_data_label_seg get_sampling_command load_h5 get_category_names save_h5 load_ply_data get_obj_filenames export_ply quat2euler euler2quat mat2euler angle_axis2euler euler2angle_axis euler2mat write_ply pyplot_draw_point_cloud draw_point_cloud read_ply point_cloud_three_views_demo point_cloud_to_volume pyplot_draw_volume point_cloud_to_volume_batch point_cloud_three_views volume_to_point_cloud _open_stream _lookup_type PlyData _split_line PlyProperty PlyParseError make2d PlyListProperty PlyElement conv2d_transpose pairwise_distance fully_connected conv3d get_edge_feature max_pool3d batch_norm_template conv2d conv1d _variable_with_weight_decay batch_norm_for_conv1d dropout batch_norm_for_conv2d knn avg_pool3d max_pool2d _variable_on_cpu print avg_pool2d batch_norm_dist_template batch_norm_for_fc batch_norm_for_conv3d load_ply_normal pad_arr_rows batch_mkdir save_h5_data_label_normal load_h5_data_label_normal load_h5_data_label_seg get_sampling_command load_h5 get_category_names save_h5 load_ply_data get_obj_filenames export_ply quat2euler euler2quat mat2euler angle_axis2euler euler2angle_axis euler2mat write_ply pyplot_draw_point_cloud draw_point_cloud read_ply point_cloud_three_views_demo point_cloud_to_volume pyplot_draw_volume point_cloud_to_volume_batch point_cloud_three_views volume_to_point_cloud _open_stream _lookup_type PlyData _split_line PlyProperty PlyParseError make2d PlyListProperty PlyElement conv2d_transpose pairwise_distance fully_connected conv3d get_edge_feature max_pool3d batch_norm_template conv2d conv1d _variable_with_weight_decay batch_norm_for_conv1d dropout batch_norm_for_conv2d knn avg_pool3d max_pool2d 
_variable_on_cpu avg_pool2d batch_norm_dist_template batch_norm_for_fc batch_norm_for_conv3d get_lidar_in_image_fov show_lidar_with_boxes dataset_viz viz_kitti_video kitti_object_video show_lidar_on_image kitti_object show_image_with_boxes load_velo_scan project_to_image compute_box_3d Object3d rotz inverse_rigid_trans roty rotx Calibration draw_projected_box3d load_image transform_from_rot_trans read_label compute_orientation_3d read_det_file in_hull extract_pc_in_box3d random_shift_box2d extract_pc_in_box2d extract_frustum_data_rgb_detection write_2d_rgb_detection demo extract_frustum_data get_box3d_dim_statistics test_plot3d draw_gt_boxes3d draw_lidar draw_lidar_simple get_model get_instance_seg_v2_net get_3d_box_estimation_v2_net point_cloud_masking get_loss huber_loss placeholder_inputs parse_output_to_tensors get_box3d_corners tf_gather_object_pc get_box3d_corners_helper get_center_regression_net sample_and_group pointnet_sa_module pointnet_fp_module new_group_point pointnet_sa_module_bkup pointnet_sa_module_msg pointnet_sa_module_msg_bkup sample_and_group_all batch_norm_template batch_norm_for_conv1d conv2d_transpose dropout fully_connected conv3d batch_norm_for_conv2d batch_norm_for_fc batch_norm_for_conv3d avg_pool2d conv2d conv1d avg_pool3d max_pool3d max_pool2d _variable_with_weight_decay batch_norm_template_unused _variable_on_cpu three_nn three_interpolate _three_interpolate_grad GroupPointTest fun query_ball_point group_point select_top_k _group_point_grad knn_point GroupPointTest farthest_point_sample gather_point _gather_point_grad prob_sample get_instance_seg_v1_net get_model get_3d_box_estimation_v1_net get_model get_instance_seg_v2_net get_3d_box_estimation_v2_net point_cloud_masking get_loss huber_loss placeholder_inputs parse_output_to_tensors get_box3d_corners tf_gather_object_pc get_box3d_corners_helper get_center_regression_net sample_and_group pointnet_sa_module pointnet_fp_module new_group_point pointnet_sa_module_bkup 
pointnet_sa_module_msg sample_and_group_all batch_norm_template batch_norm_for_conv1d conv2d_transpose dropout fully_connected conv3d batch_norm_for_conv2d batch_norm_for_fc batch_norm_for_conv3d avg_pool2d conv2d conv1d avg_pool3d max_pool3d max_pool2d _variable_with_weight_decay batch_norm_template_unused _variable_on_cpu three_nn three_interpolate _three_interpolate_grad GroupPointTest fun query_ball_point group_point select_top_k _group_point_grad knn_point GroupPointTest farthest_point_sample gather_point _gather_point_grad prob_sample get_instance_seg_v1_net get_model get_3d_box_estimation_v1_net get_model get_instance_seg_v2_net get_3d_box_estimation_v2_net point_cloud_masking get_loss huber_loss placeholder_inputs parse_output_to_tensors get_box3d_corners tf_gather_object_pc get_box3d_corners_helper get_center_regression_net sample_and_group pointnet_sa_module pointnet_fp_module new_group_point pointnet_sa_module_bkup pointnet_sa_module_msg pointnet_sa_module_msg_bkup sample_and_group_all batch_norm_template batch_norm_for_conv1d conv2d_transpose dropout fully_connected conv3d batch_norm_for_conv2d batch_norm_for_fc batch_norm_for_conv3d avg_pool2d conv2d conv1d avg_pool3d max_pool3d max_pool2d _variable_with_weight_decay batch_norm_template_unused _variable_on_cpu three_nn three_interpolate _three_interpolate_grad GroupPointTest fun query_ball_point group_point select_top_k _group_point_grad knn_point GroupPointTest farthest_point_sample gather_point _gather_point_grad prob_sample box2d_iou plot_polys is_clockwise poly_area convex_hull_intersection box3d_iou polygon_clip get_iou box3d_vol parse_timeline dict2list class2size from_prediction_to_label_format compute_box3d_iou angle2class rotate_pc_along_y size2class FrustumDataset get_3d_box class2angle class2size from_prediction_to_label_format compute_box3d_iou angle2class rotate_pc_along_y size2class FrustumDataset get_3d_box class2angle class2size from_prediction_to_label_format compute_box3d_iou 
angle2class rotate_pc_along_y size2class FrustumDataset get_3d_box class2angle write_detection_results test_from_rgb_detection test softmax inference fill_files get_session_and_ops get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_batch_from_rgb_detection get_batch get_instance_seg_v1_net get_model get_3d_box_estimation_v1_net get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_instance_seg_v1_net get_model get_3d_box_estimation_v1_net get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_model get_instance_seg_v2_net get_3d_box_estimation_v2_net get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_model get_instance_seg_v2_net get_3d_box_estimation_v2_net get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_model get_instance_seg_v2_net get_3d_box_estimation_v2_net get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay log_string rotate_point_cloud load_h5_data_label_seg loadDataFile getDataFiles load_h5 rotate_point_cloud_by_angle shuffle_data jitter_point_cloud rotate_perturbation_point_cloud shift_point_cloud random_scale_point_cloud train_classifier train_classifier_one_epoch get_learning_rate eval_one_epoch log_string save_global_feature train_one_epoch eval_classifier_one_epoch train get_bn_decay calc_ldgcnn_feature get_model get_loss placeholder_inputs get_model get_loss placeholder_inputs train_classifier train_classifier_one_epoch get_learning_rate eval_one_epoch log_string save_global_feature train_one_epoch eval_classifier_one_epoch train get_bn_decay calc_ldgcnn_feature get_model get_loss placeholder_inputs calc_ldgcnn_feature get_model get_loss placeholder_inputs get_model get_loss placeholder_inputs load_ply_normal pad_arr_rows batch_mkdir save_h5_data_label_normal load_h5_data_label_normal load_h5_data_label_seg get_sampling_command load_h5 get_category_names save_h5 
load_ply_data get_obj_filenames export_ply quat2euler euler2quat mat2euler angle_axis2euler euler2angle_axis euler2mat _open_stream _lookup_type PlyData _split_line PlyProperty PlyParseError make2d PlyListProperty PlyElement find_duplicate_columns scaling_factor knn_indices_general compute_curvature curvature_based_sample get_indices sort_points gauss_clip find_farthest_points batch_distance_matrix_general random_choice_2d depthwise_conv2d compute_determinant uniform batch_normalization conv2d feature_probability_sampling get_xforms knn_indices compute_eigenvals inverse_density_sampling rotation_angle distance_matrix augment dense find_farthest_points_batch batch_distance_matrix prepare_for_unique_top_k calc_dist separable_conv2d group_norm_for_conv conv2d_transpose group_norm_for_fc conv2d_spy pairwise_distance fully_connected conv3d get_edge_feature max_pool3d get_edge_group_feature batch_norm_template knn_with_RBF_dist conv2d conv1d _variable_with_weight_decay batch_norm_for_conv1d get_edge_cross_feature dropout batch_norm_for_conv2d get_edge_times_feature topk_pool knn batch_norm_template_multiGPU avg_pool3d max_pool2d spiderConv _variable_on_cpu get_new_edge_feature knn_random avg_pool2d get_triangle_edge_feature batch_norm_dist_template batch_norm_for_fc batch_norm_for_conv3d FileIO PlotClass eval_one_epoch log_string evaluate pc_normalize ModelNetDataset loadDataFile getDataFiles load_h5 shuffle_data ModelNetH5Dataset get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_model get_loss placeholder_inputs get_model get_loss placeholder_inputs get_model get_loss placeholder_inputs get_model get_loss placeholder_inputs get_model get_loss placeholder_inputs get_model get_loss placeholder_inputs pc_normalize PartDataset pc_normalize PartNormalDataset get_batch get_learning_rate 
eval_one_epoch log_string train_one_epoch train get_bn_decay get_batch get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_batch get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay get_batch get_learning_rate eval_one_epoch log_string train_one_epoch train get_bn_decay virtual_scan cart2sph collect_one_scene_data_label log_string get_raw2scannet_label_map three_nn three_interpolate _three_interpolate_grad GroupPointTest fun query_ball_point group_point select_top_k _group_point_grad knn_point GroupPointTest farthest_point_sample gather_point _gather_point_grad prob_sample point_cloud_to_volume_v2 write_ply pyplot_draw_point_cloud write_ply_color draw_point_cloud read_ply point_cloud_to_volume_v2_batch point_cloud_to_image_batch point_cloud_three_views_demo point_cloud_to_volume pyplot_draw_volume point_cloud_to_volume_batch point_cloud_three_views volume_to_point_cloud point_cloud_to_image sample_and_group pointnet_sa_module pointnet_fp_module pointnet_sa_module_msg sample_and_group_all rotate_point_cloud_by_angle_with_normal rotate_point_cloud shuffle_points rotate_point_cloud_with_normal loadDataFile getDataFiles load_h5 rotate_point_cloud_z shuffle_data rotate_perturbation_point_cloud_with_normal rotate_point_cloud_by_angle rotate_perturbation_point_cloud jitter_point_cloud shift_point_cloud random_scale_point_cloud random_point_dropout onmouse showpoints batch_norm_template batch_norm_for_conv1d conv2d_transpose dropout fully_connected conv3d batch_norm_for_conv2d batch_norm_for_fc batch_norm_for_conv3d avg_pool2d conv2d conv1d avg_pool3d max_pool3d max_pool2d _variable_with_weight_decay batch_norm_template_unused _variable_on_cpu point_cloud_to_volume_v2 write_ply pyplot_draw_point_cloud write_ply_color draw_point_cloud read_ply point_cloud_to_volume_v2_batch point_cloud_to_image_batch point_cloud_three_views_demo point_cloud_to_volume pyplot_draw_volume point_cloud_to_volume_batch 
point_cloud_three_views volume_to_point_cloud point_cloud_to_image sample_and_group pointnet_sa_module pointnet_fp_module pointnet_sa_module_msg sample_and_group_all rotate_point_cloud_by_angle_with_normal rotate_point_cloud shuffle_points rotate_point_cloud_with_normal loadDataFile getDataFiles load_h5 rotate_point_cloud_z shuffle_data rotate_perturbation_point_cloud_with_normal rotate_point_cloud_by_angle rotate_perturbation_point_cloud jitter_point_cloud shift_point_cloud random_scale_point_cloud random_point_dropout onmouse showpoints batch_norm_template batch_norm_for_conv1d conv2d_transpose dropout fully_connected conv3d batch_norm_for_conv2d batch_norm_for_fc batch_norm_for_conv3d avg_pool2d conv2d conv1d avg_pool3d max_pool3d max_pool2d _variable_with_weight_decay batch_norm_template_unused _variable_on_cpu batch_norm_template batch_norm_for_conv1d conv2d_transpose dropout fully_connected conv3d batch_norm_for_conv2d batch_norm_for_fc batch_norm_for_conv3d avg_pool2d conv2d conv1d avg_pool3d max_pool3d max_pool2d _variable_with_weight_decay batch_norm_template_unused _variable_on_cpu data ModelNet40Cls DataLoader cuda max list DensePoint view numel load_state_dict append parse_args sum range cat Compose eval PointcloudScale setattr checkpoint enumerate load items time print contiguous randint batch_size LambdaLR BNMomentumScheduler len Adam CrossEntropyLoss save_path parameters train makedirs data validate view model criterion backward contiguous print zero_grad epochs PointcloudScaleAndTranslate randint step cuda range enumerate data model save max cuda view numel append sum cat state_dict eval enumerate criterion print contiguous clone randint train norm outer from_numpy eye array File add_argument add_mutually_exclusive_group ArgumentParser set_defaults objs create_extension rmtree join sum transpose unsqueeze zeros size dist range append named_parameters DataParallel isinstance state_dict copyfile format save load format print load_state_dict isfile dir 
_wrap_function getattr append callable print write flush restore time print eval_one_epoch log_string ConfigProto Session pi rotate_point_cloud_by_angle argmax point_cloud_three_views run open str squeeze shape sum range imsave log_string FileWriter close mean float join print graph add_graph write loadDataFile zeros array len arange shuffle len reshape cos pi dot shape uniform sin zeros array range reshape cos dot shape sin zeros array range randn reshape dot shape zeros range array clip shape clip randn shape uniform range shape uniform range File File exponential_decay maximum minimum exponential_decay arange jitter_point_cloud random_scale_point_cloud argmax run str squeeze shift_point_cloud sum range log_string shuffle rotate_perturbation_point_cloud float rotate_point_cloud loadDataFile shuffle_data add_summary len int32 float32 placeholder value dropout print reshape concat reduce_max fully_connected shape conv2d expand_dims reduce_mean one_hot softmax_cross_entropy value print reshape fully_connected reduce_max conv2d shape max_pool2d expand_dims pairwise_distance matmul knn get_edge_feature print write mean sqrt sum max array concatenate zeros range ConfigProto Saver tile max_pool2d argmax sparse_softmax_cross_entropy_with_logits concat reduce_mean zip append expand_dims concat zeros PlyData write range join print join len join mkdir File close create_dataset File close create_dataset File read array read array append array cos sin eps asarray atan2 sqrt flat cos sin angle_axis2mat squeeze point_cloud_to_volume flatten append expand_dims range zeros float astype append vstack array range data read array write array describe int exp abs transpose min mean sqrt argsort round argwhere zeros sum max range euler2mat concatenate draw_point_cloud fromarray uint8 read_ply save point_cloud_three_views set_xlabel add_subplot scatter set_ylabel figure set_zlabel pyplot_draw_point_cloud volume_to_point_cloud append split dtype len property hasattr property property 
property multiply add_to_collection xavier_initializer _variable_on_cpu l2_loss truncated_normal_initializer squeeze transpose square reduce_sum matmul expand_dims top_k get_shape value reshape squeeze concat tile gather expand_dims range get_image join show get_lidar print project_velo_to_rect kitti_object_video eval draw_lidar input range len show P compute_box_3d copy rectangle draw_projected_box3d project_velo_to_image plot3d show P get_lidar_in_image_fov print compute_box_3d draw_lidar project_rect_to_velo figure draw_gt_boxes3d compute_orientation_3d show get_lidar_in_image_fov project_velo_to_rect get_cmap range circle get_image join get_label_objects get_calibration COLOR_BGR2RGB print show_lidar_with_boxes len shape eval input cvtColor range print_object kitti_object show_image_with_boxes cos sin cos sin cos sin reshape dot transpose zeros_like fromfile reshape print transpose hstack dot project_to_image ry transpose h w dot roty vstack any l project_to_image ry transpose dot roty any array line astype int32 CV_AA range Delaunay in_hull zeros in_hull P zeros_like show_lidar_with_boxes draw_lidar show project_image_to_velo extract_pc_in_box3d COLOR_BGR2RGB shape show_lidar_on_image input print_object show_image_with_boxes get_image project_velo_to_rect eval draw_gt_boxes3d kitti_object get_calibration join get_label_objects get_lidar_in_image_fov print compute_box_3d project_rect_to_velo figure cvtColor random zeros_like P project_image_to_rect points3d ry extract_pc_in_box3d shape append input range get_image get_lidar random_shift_box2d arctan2 project_velo_to_rect eval box2d float type kitti_object get_calibration join get_label_objects get_lidar_in_image_fov print compute_box_3d figure zeros array len get_calibration join get_label_objects ry print append type array range kitti_object len int rstrip open append float array split zeros_like project_image_to_rect points3d shape append input range get_image read_det_file get_lidar arctan2 
project_velo_to_rect eval kitti_object get_calibration join get_lidar_in_image_fov print figure zeros array len join read_det_file write close open mkdir append range kitti_object len plot3d arange cos pi sin plot3d view points3d figure array plot3d view points3d figure array plot3d text3d range len pointnet_fp_module pointnet_sa_module dropout slice concat conv1d pointnet_sa_module_msg pointnet_sa_module value reshape concat fully_connected point_cloud_masking get_3d_box_estimation_v2_net get_instance_seg_v2_net parse_output_to_tensors get_center_regression_net gather_nd int32 py_func value slice ones concat transpose cos matmul stack sin zeros constant value arange reshape pi tile expand_dims get_box3d_corners_helper minimum abs value constant slice reshape pi expand_dims to_float value slice squeeze concat maximum reduce_sum set_shape tile tf_gather_object_pc value fully_connected squeeze concat conv2d max_pool2d expand_dims to_float minimum norm constant arange pi reduce_sum add_to_collection huber_loss tile get_box3d_corners expand_dims get_box3d_corners_helper scalar get_shape value reshape gather get_shape value query_ball_point new_group_point group_point print gather_point farthest_point_sample knn_point constant value reshape concat tile expand_dims value print reshape slice reduce_sum select_top_k tile value dropout concat squeeze conv2d tile max_pool2d expand_dims value fully_connected squeeze concat conv2d max_pool2d expand_dims get_instance_seg_v1_net get_3d_box_estimation_v1_net concat append computeIntersection inside polygon_clip ConvexHull sqrt sum poly_area min convex_hull_intersection max box3d_vol float min max append array subplots Polygon range len transpose cos dot sin array int float pi float pi dot transpose roty vstack class2size get_3d_box vstack box3d_iou class2angle append argmax array range class2size squeeze class2angle shape exp max print timer zeros range run join from_prediction_to_label_format write close open mkdir append range 
len join close open int arange get_batch get_session_and_ops print min stdev mean inference append zeros range len get_batch arange print len inference range get_session_and_ops int get_batch now arange get_batch now add_summary zeros range one_hot zeros range one_hot join restore str model print concatenate squeeze log_string loadDataFile shape eval array range len sum concatenate squeeze len loadDataFile shuffle_data add_summary argmax range run sum concatenate squeeze len loadDataFile mean add_summary float argmax array range run pairwise_distance print concat reduce_max squeeze knn conv2d get_edge_feature expand_dims calc_ldgcnn_feature squeeze isinstance concatenate min choice randrange append expand_dims full range min max gauss list list scaling_factor rotation_angle range empty diag euler2mat shape clip_by_value random_normal matmul transpose matmul reduce_sum transpose matmul reduce_sum transpose matmul reduce_sum fill range unique int32 py_func batch_distance_matrix reshape concat prepare_for_unique_top_k shape top_k tile range reshape batch_distance_matrix_general concat prepare_for_unique_top_k shape top_k tile range norm constant print subtract concat reduce_max exit reshape reduce_sum shape reduce_mean gather_nd top_k startswith tile expand_dims reduce_min range reshape cos square pi sqrt compute_determinant trace clip_by_value eye acos compute_eigenvals matmul reduce_sum reduce_mean expand_dims reduce_min reshape concat compute_curvature shape top_k tile range ones range choice reshape concat reduce_sum reduce_mean set_shape int32 tile abs range py_func reshape concat set_shape int32 tile range py_func batch_distance_matrix reshape concat reduce_sum reduce_mean top_k int32 set_shape tile abs range py_func separable_conv2d expand_dims exp top_k top_k range random_shuffle get_shape value reshape squeeze concat tile gather expand_dims range get_shape value reshape squeeze reverse_v2 concat tile gather expand_dims range get_shape value reshape squeeze 
concat tile gather expand_dims range get_shape value reshape squeeze concat tile gather expand_dims range get_shape value reshape squeeze concat tile gather expand_dims range evaluate_epoch range normal rotate_point_cloud_by_angle_with_normal has_next_batch next_batch ModelNetH5Dataset has_next_batch reset zeros next_batch reset pointnet_sa_module pointnet_fp_module slice conv1d values list sorted append seg_classes astype keys min int32 zeros arctan2 sqrt shape max ones reshape float len cart2sph mean cross linspace meshgrid kneighbors array range fit join str read_ply_xyzrgb concatenate ones log_string index shape save append range len len split range set append point_cloud_to_volume_v2 expand_dims range tuple astype choice pad vstack append zeros float array range append expand_dims range point_cloud_to_image tuple astype choice pad vstack append zeros float array range write astype close max range open reshape random_uniform gather expand_dims range arange shuffle reshape cos pi dot shape uniform sin zeros array range reshape cos pi dot uniform sin array range randn reshape dot shape zeros range array clip reshape cos dot shape sin zeros array range random range float imwrite waitKey exit mean imshow render zeros require max | ## Efficient Deep Learning for Point Clouds This project is about designing efficient point cloud Deep Neural Networks with pure algorithm (software-level) optimizations. We propose a technique named **Delayed-Aggregation**, which: 1. reduces redundant computation to achieve workload efficiency; 2. exposes parallelism that can be easily captured by the underlying hardware. For the background of point cloud neural networks and how our delayed-aggregation helps improves the execution efficiency, see the [wiki](https://github.com/horizon-research/Efficient-Deep-Learning-for-Point-Clouds/wiki) page. ### Networks Delayed-aggregation applies to a wide range of different point cloud networks. 
This repo has the implementation for the following five networks: - PointNet++: [Classification](https://github.com/horizon-research/Efficient-Deep-Learning-for-Point-Clouds/tree/master/Networks/pointnet2), [Segmentation](https://github.com/horizon-research/Efficient-Deep-Learning-for-Point-Clouds/tree/master/Networks/pointnet2/part_seg) - DGCNN: [Classification](https://github.com/horizon-research/Efficient-Deep-Learning-for-Point-Clouds/tree/master/Networks/dgcnn), [Segmentation](https://github.com/horizon-research/Efficient-Deep-Learning-for-Point-Clouds/tree/master/Networks/dgcnn/part_seg) - LDGCNN: [Classification](https://github.com/horizon-research/Efficient-Deep-Learning-for-Point-Clouds/tree/master/Networks/ldgcnn) | 2,322 |
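The Delayed-Aggregation technique summarized in the readme above can be made concrete with a small sketch (not code from this repo — an illustrative NumPy toy with made-up sizes): because a shared per-point MLP is applied pointwise, it commutes with neighbor gathering, so the MLP can run once per unique point and the neighborhood max-aggregation can be delayed, instead of the MLP running on every duplicated gathered neighbor.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, D, F = 128, 16, 3, 32                 # points, neighbors per point, feature dims (made up)
points = rng.standard_normal((N, D))
idx = rng.integers(0, N, size=(N, K))       # stand-in for ball-query neighbor indices
W = rng.standard_normal((D, F))

def shared_mlp(x):                          # shared pointwise "MLP": one linear layer + ReLU
    return np.maximum(x @ W, 0.0)

# Naive order: gather first, then run the MLP on N*K duplicated neighbor points.
naive = shared_mlp(points[idx]).max(axis=1)

# Delayed aggregation: run the MLP once per point, gather outputs, then max-reduce.
delayed = shared_mlp(points)[idx].max(axis=1)

assert np.allclose(naive, delayed)          # same result, roughly K times fewer MLP evaluations
```

In real PointNet++-style layers the neighbor coordinates are usually centered on the sampled centroid before the MLP, in which case the reordering is exact only for the linear part of the computation; the toy above drops the centering so the equivalence holds exactly.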
hou-yz/DeepCC-local | ['multi object tracking'] | ['Locality Aware Appearance Metric for Multi-Target Multi-Camera Tracking'] | src/hyper_score/models.py src/hyper_score/Utils.py src/hyper_score/dataset.py src/hyper_score/main.py src/hyper_score/dataset_AB.py src/hyper_score/sampler.py SiameseHyperFeat HyperFeat SiameseHyperFeat HyperFeat main MetricNet HyperScoreSampler draw_curve test save_model_as_mat train addzero draw_curve pcb window SiameseHyperFeat SGD DataLoader ArgumentParser save cuda seed L epochs load_state_dict append expanduser parse_args module range format test save_model_as_mat resume mkdir manual_seed load join log_dir add_argument data_path parameters triplet train HyperFeat savemat cuda get time step_size format log_interval criterion backward param_groups float print step metric_net zero_grad lr argmax long enumerate time format log_interval criterion window print L data_path eval numpy softmax cuda float argmax long cat enumerate plot add_subplot close savefig figure legend | # DeepCC-local
This repo is based on Ergys Ristani's DeepCC \[[code](https://github.com/ergysr/DeepCC), [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Ristani_Features_for_Multi-Target_CVPR_2018_paper.pdf)\].
This tracker is implemented in *MATLAB*. We added multiple functions for performance and utilities, including our *locality-aware* setting reported in our CVPR 2019 workshop paper (to be released). Besides, support for other datasets has also been added, including MOT-16 and AI-City 2019.
# AI-City 2019 update
### Setup
For the AI-City setup, please download the folder from [google drive](https://drive.google.com/drive/folders/1BklU8afXHoLu3xmmOcSFqniD3ZUJqqfJ?usp=sharing). Note that the official AI-City 2019 track-1 dataset also has to be downloaded; this folder only acts as an incremental package. The folder we provide contains the re-ID features for demo usage. Before running, please check that the dataset path in `get_opts_aic.m` is changed to match your setup. 
| 2,323 |
hou-yz/MVDet | ['pedestrian detection', 'human detection'] | ['Multiview Detection with Feature Perspective Transformation'] | multiview_detector/models/image_proj_variant.py multiview_detector/evaluation/pyeval/getDistance.py video_visualize.py multiview_detector/datasets/MultiviewX.py multiview_detector/evaluation/pyeval/evaluateDetection.py multiview_detector/datasets/Wildtrack.py multiview_detector/models/persp_trans_detector.py multiview_detector/utils/logger.py multiview_detector/utils/image_utils.py multiview_detector/loss/gaussian_mse.py multiview_detector/models/no_joint_conv_variant.py multiview_detector/utils/projection.py multiview_detector/models/res_proj_variant.py multiview_detector/models/resnet.py multiview_detector/datasets/__init__.py multiview_detector/evaluation/pyeval/CLEAR_MOD_HUN.py multiview_detector/utils/draw_curve.py main.py multiview_detector/utils/meters.py grid_visualize.py multiview_detector/utils/nms.py multiview_detector/datasets/frameDataset.py multiview_detector/evaluation/evaluate.py multiview_detector/trainer.py main _traget_transform test PerspectiveTrainer BBOXTrainer BaseTrainer test frameDataset test MultiviewX Wildtrack test evaluate CLEAR_MOD_HUN getDistance evaluateDetection_py getDistance GaussianMSE ImageProjVariant test NoJointConvVariant test test PerspTransDetector conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock resnet101 ResProjVariant test draw_curve img_color_denormalize add_heatmap_to_image Logger AverageMeter nms get_worldcoord_from_imagecoord get_imagecoord_from_worldcoord draw_curve PerspTransDetector frameDataset ResProjVariant SGD DataLoader Logger save arch cuda seed list basename copyfile cls_thres ImageProjVariant load_state_dict append expanduser range state_dict Wildtrack gt_fpath NoJointConvVariant Compose test eval resume Normalize alpha manual_seed vars PerspectiveTrainer listdir 
img_color_denormalize join load log_interval MultiviewX print copy_tree tqdm parameters OneCycleLR train epochs makedirs frameDataset tuple FONT_HERSHEY_SIMPLEX grid_reduce COLOR_RGB2BGR VideoWriter get_pos_from_worldgrid VideoWriter_fourcc __getitem__ release list transpose num_cam expanduser COLORMAP_JET range Wildtrack LINE_AA Compose astype float uint8 MultiviewX loadtxt applyColorMap putText write tqdm _traget_transform rectangle read_pom map_kernel zeros numpy reducedgrid_shape cvtColor len arange get_worldcoord_from_imagecoord worldgrid_shape show shape imshow meshgrid append sum stack reshape minimum norm product get_imagecoord_from_worldcoord print maximum get_worldcoord_from_pos cd evaluateDetection start_matlab int T inf ones reshape getDistance where array zeros max range len concatenate loadtxt CLEAR_MOD_HUN where zeros array len model DataLoader ImageProjVariant iter next NoJointConvVariant PerspTransDetector ResNet load_state_dict load_state_dict_from_url ResProjVariant plot add_subplot close savefig figure legend fromarray uint8 asarray COLOR_BGR2RGB applyColorMap size min COLOR_RGB2BGR resize COLORMAP_JET max cvtColor norm sort min long len inv delete concatenate delete concatenate | # Multiview Detection with Feature Perspective Transformation [[Website](https://hou-yz.github.io/publication/2020-eccv2020-mvdet)] [[arXiv](https://arxiv.org/abs/2007.07247)] ``` @inproceedings{hou2020multiview, title={Multiview Detection with Feature Perspective Transformation}, author={Hou, Yunzhong and Zheng, Liang and Gould, Stephen}, booktitle={ECCV}, year={2020} } ``` Please visit [link](https://github.com/hou-yz/MVDeTr) for our new work MVDeTr, a transformer-powered multiview detector that achieves new state-of-the-art! | 2,324 |
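The feature perspective transformation named in the MVDet readme above rests on plain projective geometry: for points on the ground plane (z = 0), the 3×4 pinhole projection collapses to a 3×3 homography, which is what allows each camera's feature map to be warped onto a shared ground-plane grid. A minimal sketch follows — the camera parameters are made up, and the repo's `get_imagecoord_from_worldcoord` helper presumably plays a similar role:

```python
import numpy as np

# Made-up pinhole camera: intrinsics K and extrinsics [R | t].
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])
P = K @ np.hstack([R, t])              # 3x4 projection matrix

# For ground-plane points (z = 0) the projection reduces to a homography
# built from columns 0, 1, and 3 of P.
H = P[:, [0, 1, 3]]

def world_to_image(xy):
    """Project ground-plane (x, y) world points to pixel coordinates."""
    ones = np.ones((xy.shape[0], 1))
    img = (H @ np.hstack([xy, ones]).T).T
    return img[:, :2] / img[:, 2:3]    # perspective divide

grid = np.stack(np.meshgrid(np.arange(4), np.arange(4)), -1).reshape(-1, 2).astype(float)
pix = world_to_image(grid)
assert pix.shape == (16, 2)
```

Warping a per-view feature map onto the ground plane is then just sampling it at `world_to_image(grid)` for every ground-plane cell.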
houssamzenati/Adversarially-Learned-Anomaly-Detection | ['anomaly detection'] | ['Adversarially Learned Anomaly Detection'] | dsebm/svhn_utilities.py alad/run.py dagmm/gmm_utils.py toy_experiments/utils/data_utils.py data/cifar10.py alad/cifar10_utilities.py alad/arrhythmia_utilities.py data/kdd.py alad/svhn_utilities.py data/svhn.py utils/evaluations.py anogan/svhn_utilities.py toy_experiments/utils/data_gmm.py dsebm/arrhythmia_utilities.py utils/constants.py anogan/arrhythmia_utilities.py anogan/cifar10_utilities.py anogan/run.py anogan/kdd_utilities.py alad/kdd_utilities.py dsebm/run.py toy_experiments/utils/utils.py dsebm/kdd_utilities.py dagmm/kdd_utilities.py utils/sn.py main.py dagmm/run.py toy_experiments/utils/rng.py utils/adapt_data.py data/arrhythmia.py utils/plot_reconstructions.py dagmm/arrhythmia_utilities.py dsebm/cifar10_utilities.py path run discriminator_zz decoder leakyReLu encoder discriminator_xz discriminator_xx discriminator_zz decoder leakyReLu encoder discriminator_xz discriminator_xx discriminator_zz decoder leakyReLu encoder discriminator_xz discriminator_xx get_getter create_logdir display_progression_epoch train_and_test display_parameters run discriminator_zz decoder leakyReLu encoder discriminator_xz discriminator_xx generator discriminator leakyReLu generator discriminator leakyReLu generator discriminator leakyReLu get_getter create_logdir display_progression_epoch train_and_test display_parameters run generator discriminator leakyReLu encoder feature_extractor estimator decoder add_noise tricky_divide tricky_multiply compute_energy_and_penalty encoder feature_extractor estimator decoder reconstruction_error display_progression_epoch create_logdir train_and_test display_parameters run _get_dataset get_valid get_train get_test get_shape_input get_shape_label _to_xy get_anomalous_proportion get_shape_input_flatten _get_adapted_dataset _get_dataset get_valid get_train num_classes get_test get_shape_input get_shape_label 
get_anomalous_proportion get_shape_input_flatten _unpickle_file _get_adapted_dataset _get_dataset get_valid get_train _adapt_ratio get_test get_shape_input _encode_text_dummy get_shape_label _col_names _to_xy get_anomalous_proportion _get_adapted_dataset load get_valid get_train maybe_download get_test get_shape_input get_anomalous_proportion get_shape_input_flatten _get_adapted_dataset network UnPooling2x2ZeroFilled network UnPooling2x2ZeroFilled network UnPooling2x2ZeroFilled create_logdir display_progression_epoch train_and_test display_parameters run network UnPooling2x2ZeroFilled GMM_distribution plot_GMM sample_GMM iter_data OneHot patch center_crop shuffle ToyDataset list_shuffle tensor_shuffle set_seed zipp dropout unzip _p get_minibatches_idx batch_fill adapt_labels_outlier_task do_roc do_hists save_grid_plot do_cumdist save_results make_meshgrid do_hist get_percentile save_results_csv heatmap predict plot_hist_dis_reconstructions dense conv2d spectral_norm format model print error import_module info split print int str chr write flush format getLogger batch_size warn Saver discriminator_xx get_valid discriminator_zz placeholder get_test Supervisor encoder format RandomState get_train copy latent_dim create_logdir import_module info ConfigProto int learning_rate decoder Variable float32 reduce_mean display_parameters bool discriminator_xz str gpu generator random_normal discriminator gen dis norm l2_normalize multiply reduce_sum reduce_mean scalar xavier_initializer matrix_determinant exp matrix_inverse squeeze matmul pi reduce_sum sqrt matrix_diag_part tile expand_dims log cosine_distance concat get_shape_input warning pca extract_features estimator reshape feature_extractor print astype float32 MinMaxScaler transform train_test_split loadmat fit print _get_dataset append columns format debug transpose reshape array join format concatenate extractall warn append _unpickle_file makedirs train_test_split adapt_labels_outlier_task concatenate copy _to_xy 
_encode_text_dummy _col_names sample read_csv _adapt_ratio drop columns format get_dummies int permutation RandomState concatenate load astype float32 rescale join urlretrieve makedirs flatten join loadmat maybe_download as_list reshape concat shape stack get_or_create_global_step int32 network set_aspect subplots set_title set_xlim axis tight_layout scatter savefig set_ylim int round randint permutation arange len print permutation arange isinstance flatten max ones get int range len RandomState Random items set_value OrderedDict items get_value append range arange shuffle binomial shape ones int list plot xlabel roc_curve close ylabel ylim title savefig figure legend xlim auc list plot xlabel sort makedirs ylabel title savefig figure float array range len percentile show format close title hist savefig figure legend show format close title hist savefig figure legend max range len meshgrid arange update join subplot set_aspect format set_xticklabels set_yticklabels axis GridSpec imshow split savefig figure zip enumerate makedirs do_roc do_hists do_cumdist format print do_hist get_percentile astype save_results_csv precision_recall_fscore_support array makedirs join format reshape tostring_rgb fromstring add_subplot draw savefig figure makedirs join makedirs join close title hist vstack figure savefig range walk as_list reshape transpose matmul reduce_sum range get_variable | # Adversarially-Learned-Anomaly-Detection ALAD (Proceedings of IEEE ICDM 2018) official code The code for the paper ["Adversarially Learned Anomaly Detection" (authors: Houssam Zenati*, Manon Romain*, Chuan Sheng Foo*, Bruno Lecouat, Vijay Ramaseshan Chandrasekhar)](https://arxiv.org/abs/1812.02288) is now open source! Please reach us via emails or via github issues for any enquiries! Please cite our work if you find it useful for your research and work. 
``` @article{Zenati2018AdversariallyLA, title={Adversarially Learned Anomaly Detection}, author={Houssam Zenati and Manon Romain and Chuan Sheng Foo and Bruno Lecouat and Vijay R. Chandrasekhar}, journal={2018 IEEE International Conference on Data Mining (ICDM)}, | 2,325 |
houssamzenati/Efficient-GAN-Anomaly-Detection | ['anomaly detection'] | ['Adversarially Learned Anomaly Detection', 'Efficient GAN-Based Anomaly Detection'] | utils/adapt_data.py data/mnist.py bigan/run_kdd.py bigan/mnist_utilities.py bigan/kdd_utilities.py gan/mnist_utilities.py gan/kdd_utilities.py data/kdd.py gan/run_mnist.py main.py gan/run_kdd.py bigan/run_mnist.py utils/evaluations.py path run decoder discriminator leakyReLu encoder _leakyReLu_impl decoder discriminator leakyReLu encoder _leakyReLu_impl get_getter create_logdir display_progression_epoch train_and_test display_parameters run get_getter create_logdir display_progression_epoch train_and_test display_parameters run _get_dataset get_train get_test _adapt get_shape_input _encode_text_dummy get_shape_label _col_names _to_xy _get_adapted_dataset get_train num_classes get_test get_shape_input get_shape_label get_shape_input_flatten _get_adapted_dataset generator discriminator leakyReLu _leakyReLu_impl generator discriminator leakyReLu _leakyReLu_impl get_getter create_logdir display_progression_epoch train_and_test display_parameters run get_getter create_logdir display_progression_epoch train_and_test display_parameters run adapt_labels do_prc format d print error m rd w import_module example info label dataset nb_epochs split print int str chr write flush getLogger batch_size warn placeholder get_test discriminator Supervisor encoder format RandomState get_train copy latent_dim create_logdir info int learning_rate decoder float32 display_parameters bool transform astype float32 copy _to_xy _encode_text_dummy _col_names sample MinMaxScaler read_csv fit _adapt _get_dataset drop columns format get_dummies append columns int permutation RandomState concatenate load int permutation concatenate reshape astype float32 adapt_labels generator random_normal gen dis xlabel makedirs close ylabel precision_recall_curve ylim title savefig figure fill_between xlim step auc | # Efficient-GAN-Based Anomaly Detection 
Official implementation of the pre-print submitted to ICLRW 2018: https://arxiv.org/abs/1802.06222 NEW! An updated version of this work appears in the ["Adversarially Learned Anomaly Detection" paper](https://arxiv.org/abs/1812.02288)! Anomaly detection materials, by the Deep Learning 2.0 team in I2R, A*STAR, Singapore. Please reach us via email or GitHub issues for any enquiries! Please cite our work if you find it useful for your research: ``` @article{zenati2018, author = {Houssam Zenati and Chuan Sheng Foo and
hphuongdhsp/Q-Newton-method | ['stochastic optimization', 'protein folding'] | ["A fast and simple modification of Newton's method helping to avoid saddle points"] | src/methods.py src/cubic_reg2.py src/functionsDirectRun6.py src/main.py src/Experiment.py src/cubic_reg.py src/params.py src/functions.py src/functionsDirectRun.py StochasticGriewank.py src/utils.py src/protein.py NewtonMethod LocalNewQNewton LocalBacktrackingGD NewQNewton InertialNewtonM TwoWayBacktrackingGD LocalBFGS mainmain f3HessianS LocalNewtonMethod f3DerS f3aDer f3Der LocalUnboundedTwoWayBacktrackingGD f3aHessian RandomNewtonMethod f3b f3cHessian f3S LocalTwoWayBacktrackingGD f3dHessian f3bHessian RandomNewQNewton LocalRandomNewtonMethod LocalRandomNewQNewton f3 LocalAdaptiveCubicRegularisation f3cDer f3d f3Hessian BacktrackingGD f3bDer f3dDer f3c UnboundedTwoWayBacktrackingGD f3a LocalInertialNewtonM _AuxiliaryProblem AdaptiveCubicReg CubicRegularization Algorithm _AuxiliaryProblem AdaptiveCubicReg CubicRegularization Algorithm Experiment f18 f8Der f6 f1Der f21Der f12 f16Der f9 f7 f11Hessian f14Der f17 f7Hessian f22Hessian f15Hessian f4 f1 f5Hessian f1Hessian f9Hessian f16 f4Der f3Der f5 f5Der f12Der f2Der f14 f17Hessian f20 f20Hessian f18Der f19Hessian f20Der f6Der f7Der f2Hessian f21Hessian f10Hessian f6Hessian f9Der f15 f8 f16Hessian f22 f11 f18Hessian f3 f8Hessian f3Hessian f10Der f14Hessian f10 f2 f13Hessian f17Der f19Der f21 f22Der f11Der f19 f4Hessian f13Der f15Der f12Hessian f13 f29 f21Der f7 f17 f7Hessian f24 f23Der f3Der f5 f17Hessian f2Der f27Hessian f33Der f25Hessian f34 f2Hessian f32Hessian f25Der f6Hessian f38 f22 f28 f36Der NewQNewton f24Der f14Der f26Der f15Hessian f16 f32 f20 RandomNewtonMethod f26 f6Der f36 f21Hessian f15 RandomNewQNewton f34Der f30 f32Der f35 f8Hessian f10Der f22Der f2 f17Der f19Der f19 f30Hessian f4Hessian f35Der f15Der f13 f37Hessian f33Hessian f1Der f12 f16Der f9 TwoWayBacktrackingGD f23Hessian f28Hessian f4 f1 f35Hessian f26Hessian f33 f12Der f5Der f14 
f20Der f7Der f9Der f27Der f8 f16Hessian f11 f18Hessian f3 main f27 f3Hessian f14Hessian f10 f13Hessian f31Hessian f37Der f11Der UnboundedTwoWayBacktrackingGD f29Der f18 f8Der NewtonMethod f6 BFGS f22Hessian f34Hessian InertialNewtonM f11Hessian f5Hessian f1Hessian f9Hessian f25 f4Der f38Der f20Hessian f23 f31Der f18Der f19Hessian AdaptiveCubicRegularisation f24Hessian f10Hessian f37 f38Hessian f30Der f29Hessian f21 BacktrackingGD f31 f13Der f28Der f36Hessian f12Hessian f29 f21Der f7 f17 f7Hessian f43Hessian f24 f40Initialization f45Hessian f47Der f23Der f3Der f5 f17Hessian f2Der f27Hessian f33Der f25Hessian f42 f34 f2Hessian f32Hessian f46 f25Der f6Hessian f38 f22 f43 f46Constraint LocalAdaptiveCubicRegularisation f44Hessian f28 f36Der f41Der f44Der LocalNewQNewton LocalBacktrackingGD NewQNewton f24Der f14Der f26Der f15Hessian f16 f42Constraint f45Der f46Der f32 f20 f45Initialization RandomNewtonMethod f26 f44Initialization f6Der f36 f21Hessian f15 f42Der RandomNewQNewton f34Der f30 f32Der f35 f8Hessian f10Der f22Der f2 f17Der f19Der f45Constraint f47 f19 f30Hessian f4Hessian f35Der f39Hessian f15Der f13 f37Hessian LocalInertialNewtonM f33Hessian f1Der f12 f16Der f9 f43Constraint TwoWayBacktrackingGD f41 f23Hessian LocalBFGS f28Hessian f4 f1 f35Hessian f26Hessian f33 f12Der f5Der f43Der LocalUnboundedTwoWayBacktrackingGD f14 f41Constraint f45 f20Der f40Constraint f7Der f9Der f27Der f8 f16Hessian LocalRandomNewtonMethod LocalRandomNewQNewton f11 f18Hessian f3 f27 f40Hessian f3Hessian f14Hessian f10 f13Hessian f31Hessian f37Der f11Der f47Hessian f46Hessian UnboundedTwoWayBacktrackingGD f29Der f42Initialization f43Initialization f18 f8Der NewtonMethod f6 BFGS f22Hessian f34Hessian InertialNewtonM f11Hessian constraintChect f5Hessian f1Hessian f41Hessian LocalNewtonMethod f9Hessian f25 f39Der f46Initialization f4Der f38Der f41Initialization f42Hessian f44 f20Hessian f44Constraint f23 f39 f31Der f18Der f19Hessian AdaptiveCubicRegularisation LocalTwoWayBacktrackingGD 
f24Hessian f10Hessian f37 f38Hessian f40Der f40 f30Der f29Hessian f21 BacktrackingGD f31 f13Der f28Der f36Hessian f12Hessian main NewtonMethod BFGS InertiaNewtonMethod RandomNewtonMethod NewQNewtonMethod NewtonMethod LocalNewQNewton BFGS LocalBacktrackingGD NewQNewton InertialNewtonM f46a f46c TwoWayBacktrackingGD f46fDer LocalBFGS f46cHessian LocalNewtonMethod f46e LocalUnboundedTwoWayBacktrackingGD f46bHessian f46bInitialization RandomNewtonMethod V2 f46aInitialization f46b f46f LocalTwoWayBacktrackingGD AdaptiveCubicRegularisation f46dDer f46dHessian f46bDer f46eDer RandomNewQNewton LocalRandomNewtonMethod f46aConstraint LocalRandomNewQNewton LocalAdaptiveCubicRegularisation V1 f46d f46aDer f46eHessian BacktrackingGD f46fHessian f46bConstraint f46aHessian f46cDer UnboundedTwoWayBacktrackingGD LocalInertialNewtonM L2Norm2 dist constraintChect CheckCriticalType ArmijoCondition2 cutoff UnboundedLR NegativeOrthogonalDecomposition normal time print transpose inv f matmul fDer fHessian range normal time print transpose inv f matmul fDer fHessian range normal time print transpose inv random f matmul fDer fHessian range normal time print transpose inv random f matmul fDer fHessian range L2Norm2 normal time print transpose inv f identity matmul fDer fHessian cutoff range NegativeOrthogonalDecomposition L2Norm2 normal time print transpose inv f identity matmul fDer fHessian cutoff range NegativeOrthogonalDecomposition L2Norm2 normal time print transpose inv random identity matmul fDer f fHessian cutoff range NegativeOrthogonalDecomposition L2Norm2 normal time print transpose inv random identity matmul fDer f fHessian cutoff range NegativeOrthogonalDecomposition normal time print fmin_bfgs f fDer fHessian range normal time print f fDer fHessian AdaptiveCubicReg adaptive_cubic_reg range L2Norm2 normal time print f fDer sqrt delta0 alpha beta fHessian range L2Norm2 normal time print f fDer sqrt delta0 alpha beta fHessian range L2Norm2 normal time print f fDer sqrt delta0 
alpha beta fHessian range append L2Norm2 normal time print f fDer sqrt delta0 alpha beta fHessian range append L2Norm2 normal time print f fDer sqrt delta0 alpha beta fHessian range append len L2Norm2 normal time print f fDer sqrt delta0 alpha beta fHessian range append len normal time print f fDer fHessian array range normal time print f fDer fHessian array range sum array cos range Gradient f3GS f3HS Hessian sum range array f3S array range f3DerS eig array range f3HessianS sqrt cos range Gradient f3aG f3aH eig Hessian sqrt cos range Gradient f3bG eig f3bH Hessian sqrt cos range Gradient f3cG f3cH eig Hessian sqrt cos range Gradient f3dG eig f3dH Hessian normal f1Der print f1 type array NewtonMethod BFGS print InertiaNewtonMethod RandomNewtonMethod NewQNewtonMethod enumerate abs abs abs abs abs abs exp exp exp sin cos sin cos sin cos cos sin cos sin f4 array f4Hessian eig array array eig array exp exp exp f4 array f4Hessian eig array array eig array f11 array eig array abs array eig array abs array eig array sin cos sin cos sin cos sin sin cos sin array eig array array eig array array eig array array eig array abs eig exp cos e pi sqrt sum Gradient f24G eig f24H Hessian sum cos pi Gradient f25G f25H eig Hessian sum full range Gradient f26G eig f26H Hessian Gradient f27G eig f27H Hessian Gradient f28G eig f28H Hessian sqrt abs Gradient f29G eig f29H Hessian pi sin Gradient f30G eig f30H Hessian Gradient f31G eig f31H Hessian exp cos Gradient f32G eig f32H Hessian exp pi sqrt sin abs Gradient f33G eig f33H Hessian sqrt abs sin Gradient f34G eig f34H Hessian sin Gradient f35G eig f35H Hessian sin Gradient f36G f36H eig Hessian abs cos sin Gradient f37G eig f37H Hessian sum Gradient f38G eig f38H Hessian print time fmin_bfgs print time adaptive_cubic_reg AdaptiveCubicReg NewtonMethod BFGS print NewQNewton BacktrackingGD InertialNewtonM RandomNewQNewton RandomNewtonMethod array exp sum log matmul Gradient f39G eig f39H Hessian sqrt Gradient f40G eig f40H Hessian sqrt 
array Gradient f41G eig f41H Hessian array Gradient f42G Gradient f42H array sum array Gradient f43G Gradient f43H array Gradient f44G f44H eig Hessian array Gradient f45G f45H eig Hessian array cos sin zeros sum range Gradient f46G f46H eig Hessian range array range sum array cos range Gradient f47G eig f47H Hessian fHessian fHessian f46Constraint update CheckCriticalType L2Norm2 time print transpose inv identity matmul fDer fHessian CheckCriticalType cutoff range NegativeOrthogonalDecomposition CheckCriticalType append time print fDer CheckCriticalType range cos V1 cos sqrt V2 sin Gradient f46aG Gradient f46aH uniform pi choice V1 cos sqrt V2 sin Gradient f46bG eig f46bH Hessian choice V1 cos sqrt V2 sin Gradient f46cG eig f46cH Hessian V1 cos sqrt V2 sin Gradient f46dG f46dH eig Hessian V1 cos sqrt V2 sin Gradient f46eG f46eH eig Hessian V1 cos sqrt V2 sin Gradient f46fG eig f46fH Hessian print fHessian sqrt flatten sum L2Norm2 f fDer dot eig range real abs | # A New Q-Newton's method avoiding saddle points ### This repository is accompanying with [paper](https://arxiv.org/pdf/2006.01512.pdf): ### A modification of quasi-Newton's methods helping to avoid saddle points, Truong Trung Tuyen, Tat Dat To, Tuan Hang Nguyen, Thu Hang Nguyen, Hoang Phuong Nguyen, Maged Helmy. ## Prerequisites ``` scipy algopy numdifftools matplotlib | 2,327 |
hqminh/gp_sketch_nips | ['gaussian processes'] | ['Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes'] | experiments.py gps/ssgp.py vaegp.py gps/fgp.py covs/kernel.py mvae.py embedding_plot.py utils/utility.py parse_res.py main cluster_coloring Experiment MixtureVAE MixtureGaussianNet GaussianNet main parse_res create_std_plot plot_res deploy VAEGP GP_wrapper FGP set_seed generate_data ts dt nll_test diabetes_data gas_sensor_data get_gpu_memory_map rmse abalone_data get_cuda_device sample_rows scatter savefig figure append range fit load cluster_coloring dt vae load_data fit_transform list arange errorbar yticks xlabel ylabel parse_res savefig legend create_std_plot xticks keys zeros enumerate rc figure plot_res mkdir dict to get_cuda_device mm format std reshape mean to get_cuda_device array len dict load_diabetes get_cuda_device split load join int print mean vstack save float get_cuda_device std len append random range float view MultivariateNormal seed manual_seed list check_output dict pprint zip range len | hqminh/gp_sketch_nips | 2,328 |
hqqasw/person-search-PPCC | ['person search', 'person re identification', 'video retrieval'] | ['Person Search in Videos with One Portrait Through Visual and Temporal Links'] | utils/__init__.py utils/metric.py utils/feat_reader.py utils/propfunc.py matching.py utils/gpu_propfunc.py propagation.py run_in_movie run_across_movie run_lp run_ccpp run_in_movie run_across_movie read_affmat_across_movies read_affmat_of_one_movie parse_label read_across_movie_meta read_meta read_feat_of_one_movie read_feat_across_movies gpu_lp gpu_softmax gpu_ccpp affmat2retlist get_topk get_AP unique get_mAP affmat2retdict lp softmax ccpp join format print read_affmat_of_one_movie parse_label affmat2retlist maximum get_topk len read_meta get_mAP enumerate affmat2retdict join format print read_affmat_across_movies affmat2retlist maximum get_topk read_across_movie_meta get_mAP affmat2retdict T lp shape gpu_lp zeros range T ccpp gpu_ccpp shape zeros range run_ccpp run_lp run_ccpp run_lp add enumerate set list add set append keys enumerate len load join format isfile load join format isfile load join format isfile load join format isfile unsqueeze exp matmul shape gpu_softmax tensor numpy range diag shape numpy gpu_softmax tensor range max zeros argsort range append append range enumerate len argsort tolist range append add set unique enumerate items list get_AP keys len exp copy dot shape softmax range int logical_not shape logical_or softmax zeros sum max range | # Person Search by Progressive Propagation via Competitive Consensus (PPCC) This is the implementation of our [ECCV 2018](https://eccv2018.org/) paper ***Person Search in Videos with One Portrait Through Visual and Temporal Links***. Qingqiu Huang, Wentao Liu, Dahua Lin. ECCV 2018, Munich. This project is based on our person search dataset -- ***Cast Search in Movies (CSM)***. More details about this dataset can be found on our [project page](http://qqhuang.cn/projects/eccv18-person-search/). ## Basic Usage 1.
Download the affinity matrices and metadata of CSM from [Google Drive](https://drive.google.com/drive/folders/1u51eRnZS1rQaM7GStPTQKB0BugpnGM9W?usp=sharing) or [Baidu Wangpan](https://pan.baidu.com/s/1JG30kPTWxJmf1saA0e6CLQ) 2. Put the affinity matrices in "\*\*/data/affinity" and the metadata in "\*\*/data/meta". Here "**" means the path to which you cloned this project. | 2,329 |
hqucv/siamban | ['visual tracking'] | ['Siamese Box Adaptive Network for Visual Tracking'] | toolkit/datasets/dataset.py tools/tune.py training_dataset/yt_bb/gen_json.py siamban/models/neck/__init__.py siamban/utils/bbox.py toolkit/visualization/draw_success_precision.py siamban/models/head/__init__.py training_dataset/coco/pycocotools/setup.py vot_siamban/vot_siamban.py toolkit/visualization/draw_eao.py tools/eval.py siamban/datasets/dataset.py training_dataset/coco/visual.py toolkit/evaluation/ope_benchmark.py setup.py training_dataset/coco/pycocotools/mask.py toolkit/visualization/draw_utils.py training_dataset/det/par_crop.py toolkit/evaluation/eao_benchmark.py training_dataset/coco/pycocotools/coco.py toolkit/evaluation/__init__.py toolkit/evaluation/f1_benchmark.py toolkit/evaluation/ar_benchmark.py tools/test.py training_dataset/vid/visual.py tools/demo.py siamban/datasets/point_target.py training_dataset/vid/gen_json.py toolkit/utils/statistics.py training_dataset/yt_bb/par_crop.py siamban/models/backbone/resnet_atrous.py siamban/models/init_weight.py vot_siamban/vot.py training_dataset/coco/pycocotools/cocoeval.py toolkit/datasets/video.py toolkit/datasets/uav.py siamban/models/model_builder.py tools/hp_search.py training_dataset/got_10k/visual.py toolkit/visualization/__init__.py training_dataset/coco/par_crop.py training_dataset/lasot/parse_lasot.py toolkit/datasets/nfs.py training_dataset/det/visual.py siamban/utils/lr_scheduler.py toolkit/datasets/otb.py toolkit/datasets/got10k.py siamban/tracker/siamban_tracker.py training_dataset/lasot/gen_json.py siamban/utils/misc.py training_dataset/vid/par_crop.py tools/test_epochs.py tools/train.py training_dataset/det/gen_json.py siamban/utils/distributed.py training_dataset/got_10k/par_crop.py training_dataset/got_10k/parse_got10k.py toolkit/datasets/vot.py training_dataset/lasot/par_crop.py siamban/models/backbone/alexnet.py toolkit/utils/__init__.py toolkit/utils/misc.py training_dataset/yt_bb/visual.py 
training_dataset/got_10k/gen_json.py siamban/core/config.py siamban/utils/log_helper.py siamban/core/xcorr.py siamban/utils/point.py toolkit/datasets/trackingnet.py training_dataset/yt_bb/checknum.py siamban/models/head/ban.py toolkit/datasets/lasot.py training_dataset/coco/pycocotools/__init__.py siamban/models/backbone/mobile_v2.py training_dataset/lasot/visual.py siamban/models/neck/neck.py siamban/tracker/base_tracker.py siamban/datasets/augmentation.py siamban/models/loss.py training_dataset/coco/gen_json.py siamban/utils/average_meter.py siamban/models/iou_loss.py toolkit/visualization/draw_f1.py siamban/models/backbone/__init__.py siamban/tracker/tracker_builder.py siamban/utils/model_load.py training_dataset/vid/parse_vid.py toolkit/datasets/__init__.py xcorr_depthwise xcorr_fast xcorr_slow Augmentation SubDataset BANDataset PointTarget init_weights IOULoss select_cross_entropy_loss weight_l1_loss select_iou_loss get_cls_loss ModelBuilder AlexNet AlexNetLegacy alexnetlegacy alexnet conv_1x1_bn InvertedResidual conv_bn mobilenetv2 MobileNetV2 ResNet resnet50 Bottleneck conv3x3 resnet34 resnet18 BasicBlock get_backbone DepthwiseBAN BAN DepthwiseXCorr MultiBAN UPChannelBAN get_ban_head AdjustLayer AdjustAllLayer get_neck BaseTracker SiameseTracker SiamBANTracker build_tracker AverageMeter Meter center2corner get_axis_aligned_bbox rect1_2_cxy_wh rect_2_cxy_wh cxy_wh_2_rect1 get_min_max_bbox IoU corner2center cxy_wh_2_rect reduce_gradients DistModule get_world_size _get_local_ip broadcast_buffers broadcast_params get_rank average_reduce _dist_init dist_init Filter log_once Dummy init_log get_format get_format_custom find_caller main LogOnce print_speed add_file_handler LRScheduler WarmUPScheduler _build_lr_scheduler build_lr_scheduler LogScheduler LinearStepScheduler Net CosStepScheduler _build_warm_up_scheduler MultiStepScheduler StepScheduler _bold commit _describe describe _exec _color load_pretrain remove_prefix restore_from check_keys Point Dataset 
GOT10kDataset GOT10kVideo LaSOTDataset LaSOTVideo NFSDataset NFSVideo OTBDataset OTBVideo TrackingNetVideo TrackingNetDataset UAVVideo UAVDataset Video VOTLTDataset VOTLTVideo VOTVideo VOTDataset DatasetFactory AccuracyRobustnessBenchmark EAOBenchmark F1Benchmark OPEBenchmark determine_thresholds calculate_accuracy calculate_expected_overlap calculate_f1 overlap_ratio success_overlap success_error determine_thresholds calculate_failures draw_eao draw_f1 draw_success_precision main get_frames main parse_range_int run_tracker parse_range _check_and_occupation main seed_torch build_opt_lr build_data_loader log_grads main train eval SiamBANTracker objective crop_hwc printProgress pos_s_2_bbox main crop_img crop_like_SiamFC COCO _isArrayLike Params COCOeval encode decode area toBbox crop_hwc crop_xml printProgress pos_s_2_bbox main crop_like_SiamFC crop_hwc printProgress crop_video pos_s_2_bbox main crop_like_SiamFC crop_hwc printProgress crop_video pos_s_2_bbox main crop_like_SiamFC check_borders check_size crop_hwc printProgress crop_video pos_s_2_bbox main crop_like_SiamFC parse_and_sched crop_hwc printProgress parse_and_sched pos_s_2_bbox dl_and_cut crop_like_SiamFC VOT setup_tracker warmup view conv2d append range cat conv2d view conv2d size view data fill_ isinstance Conv2d modules zero_ BatchNorm2d kaiming_normal_ index_select cuda view get_cls_loss abs sum BAN reshape cuda index_select MobileNetV2 ResNet ResNet ResNet isinstance isinstance minimum maximum norm size min mean sqrt max mean size min max all_reduce get_world_size FloatTensor list values broadcast all_reduce _all_buffers get_world_size broadcast int init_process_group set_device get_world_size device_count SOCK_DGRAM socket connect AF_INET _dist_init data requires_grad log_once format all_reduce parameters int Filter format addFilter Formatter int Filter format addFilter Formatter setFormatter format_func getLogger addHandler StreamHandler add setLevel setFormatter getLogger addHandler get_format 
FileHandler floor info getLogger f_back co_filename list basename hasattr normcase f_code current_frame log str format getLogger print debug error init_log warning info enumerate critical _build_lr_scheduler EPOCH LR LR_WARMUP WARMUP popen requires_grad format named_children len training named_parameters append _color join format _exec dirname abspath append _describe len format set info keys len format info load remove_prefix format check_keys load_state_dict info current_device load remove_prefix check_keys load_state_dict current_device inf isinstance ones sort flatten floor linspace array len len min nanmean vot_overlap_traj range len minimum maximum arange ones overlap_ratio zeros float sum range len ones power sqrt zeros float sum range len astype int32 zeros mean zeros sum array enumerate len float32 logical_not isnan any zeros sum range grid add_subplot pi set_visible linspace max values show list legend append set_thetagrids format plot zip enumerate items set_yticks min figure array set_ylim subplots arange grid axis xticks argmax values yticks set_aspect show list sorted ylabel title legend plot mean enumerate items xlabel subplots arange grid axis xticks yticks set_aspect show list sorted ylabel title legend plot autoscale mean keys enumerate items xlabel join sorted VideoCapture read glob video_name imread range config VideoCapture get_frames polylines save device VideoWriter CAP_PROP_FPS round VideoWriter_fourcc build_tracker release addWeighted list transpose waitKey map imshow ModelBuilder selectROI WND_PROP_FULLSCREEN merge_from_file get track astype eval init video_name int uint8 namedWindow write rectangle int32 LaSOTDataset UAVDataset dataset tracker_path VOTLTDataset NFSDataset dirname AccuracyRobustnessBenchmark glob realpath show_result set_tracker join OPEBenchmark min OTBDataset EAOBenchmark tracker_prefix num VOTDataset F1Benchmark len list map split list map split format vot_overlap track getTickFrequency print get_axis_aligned_bbox init 
append array getTickCount enumerate makedirs isfile vot_overlap get_axis_aligned_bbox FONT_HERSHEY_SIMPLEX destroyAllWindows getTickCount name append getTickFrequency putText create_dataset array makedirs seed str manual_seed DistributedSampler BAN DataLoader info BANDataset TRAIN_LAYERS build_lr_scheduler isinstance START_EPOCH SGD parameters eval modules BatchNorm2d train step ADJUST items list norm weights_grads replace add_scalar reduce_gradients describe model clip_grad_norm_ zero_grad save dataset GRAD_CLIP sorted list BATCH_SIZE START_EPOCH len get_rank module SNAPSHOT_DIR update format EPOCH param_groups get_world_size is_valid_number avg info average_reduce item print_speed enumerate items time add_scalar build_opt_lr get_cur_lr backward AverageMeter parameters log_grads step makedirs commit PRETRAINED backbone add_file_handler dist_init START_EPOCH LOG_DIR RESUME restore_from SummaryWriter DistModule build_data_loader cfg INFO build_opt_lr dumps load_pretrain train list OPEBenchmark F1Benchmark mean EAOBenchmark eval_success show_result set_tracker values vot_overlap get_axis_aligned_bbox dataset destroyAllWindows getTickCount name SiamBANTracker append format LR track getTickFrequency eval suggest_uniform init info PENALTY_K enumerate join print WINDOW_INFLUENCE array makedirs str int format write float round flush warpAffine float astype sqrt crop_hwc sum pos_s_2_bbox join format imwrite mean imread crop_like_SiamFC enumerate makedirs COCO imgs mkdir shape join parse imwrite replace format mean find findall imread crop_like_SiamFC enumerate makedirs sorted join sorted format imwrite int glob loadtxt print makedirs mean imread crop_like_SiamFC enumerate open listdir sqrt float prod parse replace text find findall int join str iterrows dump print from_csv unique open str join format imwrite int iterrows check_call mean imread crop_like_SiamFC template cuda range merge_from_file eval ModelBuilder build_tracker warmup | # SiamBAN This project hosts the code 
for implementing the SiamBAN algorithm for visual tracking, as presented in our paper: ``` @inproceedings{siamban, title={Siamese Box Adaptive Network for Visual Tracking}, author={Chen, Zedu and Zhong, Bineng and Li, Guorong and Zhang, Shengping and Ji, Rongrong}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={6668--6677}, year={2020} } | 2,330 |
hrashkin/plotmachines | ['story generation'] | ['PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking'] | src/model/train.py src/model/data_loader.py src/model/loss.py src/model/model.py src/model/parallel.py src/model/generate_stories.py src/model/generate_utils.py src/model/logger.py src/model/eval_utils.py src/preprocessing/extract_outlines.py ParagraphWithMemoryDataset get_paragraph_input_loader ParagraphDataset FullStoryDataset get_fullstory_loader get_paragraph_memory_input_loader format_text evaluate_doc_model clear_dirs get_average_scores tfmclassifier generatedocs generate_paragraph init main generate_paragraph toks_to_str Logger ParagraphLoss LMLoss PlotMachinesModel GPT2BaseModel GatedMemoryUpdate GPT2MemoryBlock GPT2NeighborModel MemoryAttention GPT2NeighborLMHeadModel GPT2MemLMHeadModel GPT2MemModel CallbackContext allreduce AllReduce DataParallelModel _criterion_parallel_apply execute_replication_callbacks DistributedDataParallelModel Reduce patch_replication_callback DataParallelCriterion evaluate load_checkpoint print_model_params run_batch get_loss_value save_checkpoint init main run_epoch get_average_scores clean_top_features convert_keys_to_str trim_body sorting ParagraphDataset FullStoryDataset ParagraphWithMemoryDataset glob remove format makedirs join format replace list write dumps Rouge get_scores keys range len format print min len tqdm eval range open model eval unsqueeze tensor sum cuda range len str replace view model size device_count item append range remove move print tqdm eval _exists dump_to_file open seed manual_seed_all manual_seed from_pretrained DataParallelModel device save_dir add_special_tokens list GPT2BaseModel p debug_mode data_dir k device_count load_state_dict n_ctx to bodynum decoding_strategy get eval init beam get_fullstory_loader keys load join PlotMachinesModel n_batch load_dir accum_iter print generatedocs gen_len len toks_to_str append convert_tokens_to_string convert_ids_to_tokens item 
join isinstance _worker len start is_grad_enabled append range Lock list hasattr __data_parallel_replicate__ modules enumerate len replicate to compute_loss_fct model print join copy save load items list isinstance load_state_dict Tensor cuda values print format get_average_scores enumerate zero_grad save_checkpoint rouge_summary str list get_loss_value set_postfix state_dict format float enumerate items evaluate backward print run_batch tqdm empty_cache train step scalar_summary len str print close write sum open join print output_dir experiment_name makedirs val_log_interval use_offline_gpt2 get_paragraph_memory_input_loader ParagraphLoss Logger output_dir DataParallelCriterion get_paragraph_input_loader experiment_name run_epoch desc CrossEntropyLoss range format print_model_params num_epochs checkpoint AdamW load_checkpoint evaluate_doc_model train_log_interval makedirs sorted endswith strip append split sorting startswith append range len range len | # plotmachines Cleaned up version of the PlotMachines code ### Preprocessing code: code located in src/preprocessing (follow instructions in the README in that directory) ### PlotMachines model: code located in src/model (follow instructions in the README in that directory) | 2,331 |
hrayrhar/T-CorEx | ['time series'] | ['Efficient Covariance Estimation from Temporal Data'] | scripts/run_syn_smooth.py tcorex/experiments/misc.py tcorex/covariance.py tests/test_corex.py scripts/blessing_of_dimensionality.py scripts/run_stocks.py tests/test_tcorex.py scripts/run_syn_sudden.py tcorex/experiments/vis_utils.py tcorex/base.py tcorex/__init__.py tcorex/experiments/data.py scripts/run_portfolio_optimization.py tcorex/corex.py scripts/scalability-plot.py examples/sample_run.py tcorex/experiments/fmri_utils.py scripts/append_json.py setup.py tcorex/experiments/baselines.py tcorex/tcorex.py tcorex/tcorex_learnable.py main main main main main main main load g TCorexBase mean_impute save to_numpy g_inv get_u_from_w Corex get_w_from_u _compute_diff_row_norms _inverse compute_diff_row_norms _diag_from_left _diag_from_right calculate_nll_score frob_diffs_given_factors _compute_diff_norm_fro _compute_inverses diffs _estimate_diff_norm spectral_diffs_given_factors reorder TCorex TCorexLearnable entropy Diagonal SparsePCA FactorAnalysis TCorex GroundTruth PCA GraphLasso Baseline BigQUIC LVGLASSO OAS QUIC LinearCorex LTGL TimeVaryingGraphLasso LedoitWolf sample_from_modular generate_modular generate_general modular_sufficient_params make_buckets generate_approximately_modular load_modular_sudden_change load_trading_economics load_sp500 load_modular_smooth_change modular_matrix_from_params plot_clusters_probabilistic plot_most_important compute_variance_of_cluster plot_least_varying plot_clusters plot_biggest make_sure_path_exists plot_for_next_timestep plot_cov_matrix test_corex test_tcorex_on_synthetic_data test_tcorex_real_data TCorex format subplots set_title plot print get_covariance set_xlabel clusters colorbar select load_modular_sudden_change imshow diffs savefig set_ylabel save fit adjusted_rand_score generate_approximately_modular ArgumentParser PyCorex output_dir argmax snr str n_observed Corex parse_args set_defaults n_hidden n_samples choice 
make_sure_path_exists join add_argument repeat float64 train_cnt qp linspace load_sp500 exists percentile name ones val_cnt shape savetxt prefix nt concatenate start_period astype mean matrix date noise_var len start_date end_date test_cnt commodities evaluate log_return data_type max_std n_segments load_modular_smooth_change min_std bs m shuffle timeit min set add nvs append tanh clip arctanh clip T where mean nan append enumerate len cpu requires_grad detach format print make_sure_path_exists dirname ws to_numpy get_weights len append norm range len sorted list range len shape range zeros_like shape range zeros_like T _diag_from_right inv dot shape eye cholesky _inverse sum range len normal T norm reshape dot range format print _compute_inverses _estimate_diff_norm range len svd T norm dot trace inner sum format print _compute_diff_norm_fro _compute_inverses range len T dot shape zeros sum range _compute_diff_row_norms format print _compute_inverses range len normal ones sign sqrt uniform zeros range len zeros array modular_matrix_from_params modular_sufficient_params int normal ones choice sqrt append zeros array range make_spd_matrix list reshape shuffle zeros range seed reshape generate_modular range seed multivariate_normal append zeros float range modular_matrix_from_params concat drop fillna seed list columns sorted shape dirname append fit_transform range shuffle pivot_table zip join sort_index print read_pickle dropna StandardScaler array read_csv len drop fillna seed list columns sorted shape dirname append fit_transform range shuffle pivot_table join sort_index print read_pickle dropna StandardScaler array len list append array range enumerate len print argsort scatter figure gca max range print copy argsort scatter figure gca sum max range print argsort scatter figure gca sum max range int plot_prob_atlas shape Nifti1Image zeros range int shape Nifti1Image zeros range dirname makedirs show colorbar imshow title figure show list format xlabel print 
ylabel mean bar title ylim xticks range len load join list format calculate_nll_score print get_covariance tqdm mean realpath shape Corex dirname append range fit load join list format TCorex calculate_nll_score print get_covariance tqdm mean realpath shape dirname append range fit load join list format TCorex calculate_nll_score print get_covariance tqdm mean realpath shape dirname append range fit | # Correlation Explanation Methods
Official implementation of linear correlation explanation (linear CorEx) and temporal correlation explanation (T-CorEx) methods.
#### Linear CorEx
Linear CorEx searches for independent latent factors that explain all correlations between observed variables, while also biasing
the model selection towards modular latent factor models – directed latent factor graphical models
where each observed variable has a single latent variable as its only parent.
This is useful for covariance estimation, clustering related variables, and dimensionality reduction, especially in the high-dimensional and under-sampled regime.
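The experiment helpers in this repo (e.g. `calculate_nll_score` in `tcorex/experiments/misc.py`) score a covariance estimate by the Gaussian negative log-likelihood it assigns to held-out data. A self-contained numpy sketch of that kind of metric — an illustration, not the repo's exact implementation — might look like:

```python
import numpy as np

def gaussian_nll(X, sigma):
    """Average negative log-likelihood of the rows of X under a
    zero-mean Gaussian with covariance matrix `sigma` (lower is better)."""
    d = sigma.shape[0]
    sign, logdet = np.linalg.slogdet(sigma)
    assert sign > 0, "covariance estimate must be positive definite"
    # Mahalanobis term x^T sigma^{-1} x for every sample at once
    maha = np.einsum('ij,jk,ik->i', X, np.linalg.inv(sigma), X)
    return 0.5 * (d * np.log(2 * np.pi) + logdet + maha.mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))            # samples from N(0, I)
# The true covariance should score better than a badly scaled one:
print(gaussian_nll(X, np.eye(4)) < gaussian_nll(X, 25 * np.eye(4)))  # True
```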
The complete description of the method is presented in NeurIPS 2019 [paper](https://papers.nips.cc/paper/9691-fast-structure-learning-with-modular-regularization) *"Fast structure learning with modular regularization"* by Greg Ver Steeg, Hrayr Harutyunyan, Daniel Moyer, and Aram Galstyan.
If you want to cite this paper, please use the following BibTeX entry:
| 2,332 |
hsfzxjy/dprn | ['person re identification'] | ['ABD-Net: Attentive but Diverse Person Re-Identification'] | torchreid/regularizers/param_controller.py torchreid/datasets/veri.py torchreid/models/senet.py torchreid/eval_cylib/setup.py torchreid/utils/loggers.py torchreid/models/__init__.py torchreid/eval_metrics.py torchreid/datasets/__init__.py torchreid/losses/__init__.py torchreid/datasets/prid450s.py torchreid/datasets/cuhk01.py torchreid/regularizers/SO.py torchreid/losses/spectral_loss.py torchreid/datasets/dukemtmcvidreid.py torchreid/losses/incidence_xent_loss.py torchreid/eval_cylib/test_cython.py torchreid/models/resnetmid.py torchreid/data_manager.py torchreid/regularizers/NR.py torchreid/datasets/msmt17.py torchreid/datasets/sensereid.py torchreid/losses/batch_spectral_loss.py torchreid/dataset_loader.py torchreid/losses/incidence_loss.py torchreid/losses/of_penalty.py torchreid/datasets/cub_200_2011.py torchreid/losses/sa_loss.py torchreid/utils/reidtools.py torchreid/models/resnet.py torchreid/datasets/prid2011.py torchreid/models/shufflenet.py torchreid/losses/center_loss.py torchreid/models/resnext.py torchreid/samplers.py torchreid/losses/lowrank_loss.py torchreid/models/pcb.py torchreid/losses/singular_triplet_loss.py torchreid/datasets/viper.py torchreid/datasets/bases.py torchreid/models/xception.py torchreid/losses/hard_mine_triplet_loss.py torchreid/utils/environ.py torchreid/datasets/dukemtmcreid.py torchreid/datasets/ilidsvid.py torchreid/components/attention.py torchreid/__init__.py args.py torchreid/models/mlfn.py torchreid/components/dropout.py torchreid/datasets/mars.py torchreid/models/inceptionresnetv2.py torchreid/models/mobilenetv2.py eval_acc.py torchreid/models/hacnn.py torchreid/multi_data_manager.py torchreid/regularizers/SVMO.py torchreid/datasets/cuhk03.py torchreid/models/mudeep.py torchreid/models/squeezenet.py torchreid/utils/iotools.py torchreid/models/nasnet.py torchreid/models/inceptionv4.py torchreid/datasets/vehicleid.py 
train.py torchreid/optimizers.py torchreid/transforms.py torchreid/regularizers/__init__.py torchreid/datasets/valset.py torchreid/regularizers/SVDO.py tester_multi.py torchreid/losses/cross_entropy_loss.py torchreid/models/densenet.py torchreid/datasets/ilids.py torchreid/utils/avgmeter.py torchreid/datasets/market1501_d.py torchreid/datasets/dukemtmcreid_d.py train_multi.py torchreid/utils/nuc_norm.py torchreid/losses/ring_loss.py torchreid/datasets/market1501.py torchreid/datasets/grid.py torchreid/components/branches.py torchreid/components/shallow_cam.py torchreid/utils/torchtools.py argument_parser image_dataset_kwargs optimizer_kwargs video_dataset_kwargs main extract_train_info accuracy get_criterions main test accuracy get_criterion test_classification accuracy get_criterion main train test_reid test_classification accuracy get_criterion main train test_reid ImageDataset read_image VideoDataset ImageDataManager VideoDataManager BaseDataManager eval_market1501 evaluate_py eval_cuhk03 evaluate ImageDataManager BaseDataManager init_optimizer RandomIdentitySampler RandomErasing build_transforms Random2DTranslation build_training_transforms CAM_Module PAM_Module get_attention_module_instance Identity DANetHead NPBranch Sequential MultiBranchNetwork DANBranch ABDBranch GlobalBranch SimpleDropoutOptimizer DropoutOptimizer ShallowCAM BaseDataset BaseImageDataset BaseVideoDataset CUB_200_2011 CUHK01 CUHK03 DukeMTMCreID DukeMTMCreID_D DukeMTMCVidReID GRID iLIDS iLIDSVID Market1501 Market1501_D Mars MSMT17 PRID2011 PRID450S SenseReID ValSet VehicleID VeRi VIPeR init_imgreid_dataset init_vidreid_dataset numpy_include BatchSpectralLoss CenterLoss CrossEntropyLoss TripletLoss IncidenceLoss IncidenceXentLoss LowRankLoss OFPenalty RingLoss sa_loss SingularTripletLoss SpectralLoss DeepSupervision _copy_dense_layer DenseNet densenet121_backbone _make_densenet init_pretrained_weights densenet161_backbone DenseNetDeepBranch _DenseLayer _DenseBlock DummyFD _Transition 
MultiBranchDenseNet DenseNetCommonBranch HACNN InceptionB ChannelAttn SoftAttn SpatialAttn HarmAttn HardAttn InceptionA ConvBlock Block17 Block8 Mixed_6a Mixed_5b BasicConv2d InceptionResNetV2 inceptionresnetv2 Block35 Mixed_7a Mixed_4a Mixed_5a Reduction_B Inception_B init_pretrained_weights BasicConv2d Inception_A InceptionV4Base Reduction_A Mixed_3a Inception_C inceptionv4 MLFNBlock mlfn MLFN init_pretrained_weights Bottleneck ConvBlock MobileNetV2 Reduction ConvLayers MultiScaleB MultiScaleA MuDeep Fusion ConvBlock NormalCell BranchSeparablesStem AvgPoolPad ReductionCell1 ReductionCell0 MaxPoolPad init_pretrained_weights nasnetamobile SeparableConv2d BranchSeparables CellStem0 FirstCell BranchSeparablesReduction CellStem1 NASNetAMobile pcb_p6 init_pretrained_weights Bottleneck pcb_p4 conv3x3 PCB BasicBlock DimReduceLayer MultiBranchMGNLikeResNet resnet50_backbone resnet50_cls ResNet resnet50 init_pretrained_weights ResNetDeepBranch Bottleneck ResNetMGNLikeCommonBranch ResNetCommonBranch resnet50_mgn_like conv3x3 ResNetMGNLikeDeepBranch MultiBranchResNetCls MultiBranchResNet BasicBlock ResNet resnet50mid init_pretrained_weights Bottleneck conv3x3 BasicBlock ResNeXtBottleneck resnext50_32x4d init_pretrained_weights resnext101_32x4d ResNeXt se_resnext50_32x4d senet154 SENet SEResNetBottleneck SEBottleneck SEResNeXtBottleneck se_resnet50_fc512 init_pretrained_weights Bottleneck se_resnet152 se_resnet50 se_resnext101_32x4d SEModule se_resnet101 ShuffleNet Bottleneck ChannelShuffle SqueezeNet squeezenet1_1 squeezenet1_0_fc512 init_pretrained_weights Fire squeezenet1_0 Block init_pretrained_weights xception SeparableConv2d Xception init_model get_names NoneRegularizer ParamController HtriParamController SORegularizer SVDORegularizer SVMORegularizer ConvRegularizer get_regularizer AverageMeter get_env_or_raise check_isfile read_json save_checkpoint write_json mkdir_if_missing RankLogger Logger binv compute_error msqrt SymNucNorm nuclear_norm _apply_func NucNorm 
generate_symm_matrix _functional_nuc_norm visualize_ranked_results open_specified_layers set_bn_to_eval count_num_param adjust_learning_rate open_all_layers init_params add_argument ArgumentParser topk isinstance size t eq mul_ expand_as append sum max SingularLoss WrappedCrossEntropyLoss HtriParamController WrappedTripletLoss fix_custom_loss IncidenceXentLoss SpectralLoss IncidenceLoss BatchSpectralLoss label_smooth SingularTripletLoss LowRankLoss init_model extract_train_info MultiStepLR gpu_devices Logger arch save_dir cuda seed regularizer get_regularizer load_state_dict init_optimizer state_dict manual_seed_all update format start_epoch load_weights resume return_dataloaders manual_seed is_available load join print ImageDataManager get_criterions count_num_param parameters num_train_pids use_cpu eval vars TripletLoss CrossEntropyLoss visualize_ranked_results visualize_ranks return_testdataset_by_name test evaluate target_names items list format print AverageMeter eval_result average eval append array save_checkpoint show_summary fixbase_epoch round str step max_epoch RankLogger source_names range set timedelta vars time test_classification write get_criterion open_layers train test_reid OFPenalty model of_penalty zero_grad open_specified_layers regularizer get update format size item vars float enumerate time criterion backward print AverageMeter open_all_layers open_layers step len eval defaultdict get format evaluate print AverageMeter expand t addmm_ eval flip_eval avg savemat numpy test_batch_size convert cumsum list defaultdict shape append sum range format asarray astype choice mean enumerate invert items print float32 argsort int32 zeros len invert format asarray print cumsum astype float32 argsort shape mean int32 append sum range RandomErasing print Random2DTranslation ToTensor set Resize Normalize RandomHorizontalFlip append ColorJitter get print Compose Normalize build_training_transforms lower get_include IncidenceLoss get_env_or_raise 
CrossEntropyLoss get norm view print size sum SingularLoss WrappedTripletLoss update list format print group load_url match load_state_dict keys compile state_dict init_pretrained_weights DenseNet init_pretrained_weights DenseNet load_url InceptionResNetV2 load_state_dict Linear InceptionV4Base init_pretrained_weights MLFN init_pretrained_weights NASNetAMobile init_pretrained_weights PCB init_pretrained_weights PCB init_pretrained_weights ResNet init_pretrained_weights resnet50_backbone resnet50_backbone resnet50_backbone ResNet init_pretrained_weights init_pretrained_weights ResNeXt init_pretrained_weights ResNeXt init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet init_pretrained_weights SENet SqueezeNet init_pretrained_weights SqueezeNet init_pretrained_weights SqueezeNet init_pretrained_weights init_pretrained_weights Xception get checker makedirs print format isfile dirname mkdir_if_missing get join copy dirname save mkdir_if_missing sqrt _sum rand bmm sqrt div repeat expand_as range stack bmm msqrt size expand permute __func view join format basename _cp_img_to print argsort shape range mkdir_if_missing isinstance Conv2d bias normal_ kaiming_normal_ modules BatchNorm1d BatchNorm2d weight constant_ Linear param_groups eval __name__ parameters train named_children isinstance print parameters eval DataParallel train module DataParallel sum module isinstance | # ABD-Net: Attentive but Diverse Person Re-Identification [](https://paperswithcode.com/sota/person-re-identification-on-msmt17?p=abd-net-attentive-but-diverse-person-re) [](https://paperswithcode.com/sota/person-re-identification-on-dukemtmc-reid?p=abd-net-attentive-but-diverse-person-re) [](https://paperswithcode.com/sota/person-re-identification-on-market-1501?p=abd-net-attentive-but-diverse-person-re) Code for this paper [ABD-Net: Attentive but Diverse Person 
Re-Identification](https://arxiv.org/abs/1908.01114), by Tianlong Chen, Shaojin Ding\*, Jingyi Xie\*, Ye Yuan, Wuyang Chen, Yang Yang, Zhou Ren, Zhangyang Wang. In ICCV 2019.
Refer to the **Training Guides README** [here](./README_Training_and_Testing_Guides.md), the original README [here](./README_ORIG.md), the datasets README [here](./DATASETS.md), and the Model Zoo README [here](https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.html). We provide pretrained models used in our paper:
- Market1501 [best ABD-Net model](https://drive.google.com/file/d/13bvg8LUD7JinC-usAQyE2ys9zxo4P97e/view?usp=sharing)
- Duke [best ABD-Net model](https://drive.google.com/file/d/1ojw0wva6ZlcY0v4UjiwesCyiFonZgBz8/view?usp=sharing)
- MSMT17 [best ABD-Net model](https://drive.google.com/file/d/1_ZpSfOxrid9xpSecAxEA2WAa6h-uWc1O/view?usp=sharing) | 2,333
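The `torchreid/regularizers/` modules in the dprn row above (e.g. `SORegularizer`, `SVMORegularizer`) realize the "diverse" half of ABD-Net by pushing weight matrices toward orthogonality. A minimal numpy sketch of the classic soft-orthogonality penalty ||W Wᵀ − I||²_F — the general idea rather than this repo's exact code:

```python
import numpy as np

def soft_orthogonality_penalty(W):
    """||W W^T - I||_F^2 over the rows of W: zero exactly when the
    rows are orthonormal, positive when they are redundant/correlated."""
    gram = W @ W.T
    return float(np.sum((gram - np.eye(W.shape[0])) ** 2))

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((8, 8)))   # orthogonal matrix
print(soft_orthogonality_penalty(q[:4]))           # ~0: rows already diverse
print(soft_orthogonality_penalty(np.ones((4, 8)))) # large: rows identical
```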
hszhao/semseg | ['scene parsing', 'semantic segmentation'] | ['PSANet: Point-wise Spatial Attention Network for Scene Parsing'] | lib/psa/src/__init__.py util/util.py tool/demo.py lib/psa/modules/__init__.py lib/psa/functional.py lib/psa/functions/__init__.py tool/train.py util/config.py model/resnet.py lib/psa/functions/psamask.py util/dataset.py lib/psa/modules/psamask.py model/pspnet.py model/psanet.py util/transform.py tool/test.py psa_mask PSAMask PSAMask PSANet PSA PPM PSPNet ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 check test net_process scale_process get_parser main get_logger check test net_process cal_acc scale_process get_parser main get_logger validate main_process check get_parser main_worker main train get_logger worker_init_fn load_cfg_from_cfg_file merge_cfg_from_list CfgNode _assert_with_logging _decode_cfg_value _check_and_coerce_cfg_value_type is_image_file SemData make_dataset BGR2RGB Crop RandRotate ToTensor Compose RandomVerticalFlip Resize RandomGaussianBlur Normalize RandomHorizontalFlip RandScale RGB2BGR intersectionAndUnionGPU check_mkdir colorize poly_learning_rate AverageMeter check_makedirs intersectionAndUnion step_learning_rate init_weights find_free_port group_weight load_url ResNet load_state_dict load_url ResNet load_state_dict load ResNet load_state_dict load ResNet load_state_dict load ResNet load_state_dict config load_cfg_from_cfg_file merge_cfg_from_list add_argument image ArgumentParser opts parse_args setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO compact train_w train_h shrink_factor test_w image test_h get_parser cuda PSANet base_size PSPNet load_state_dict get_logger format astype test eval classes info model_path load join check scales isfile sub_ transpose shape flip div_ interpolate softmax zip float numpy cuda cat int copyMakeBorder float min copy shape resize ceil zeros BORDER_CONSTANT max range argmax join uint8 format imwrite info COLOR_BGR2RGB 
float colorize IMREAD_COLOR shape scale_process save resize zeros imread round cvtColor save_folder DataLoader cal_acc SemData index_step Compose index_start data_list min len squeeze transpose update check_makedirs eval enumerate time AverageMeter numpy len update join val format AverageMeter intersectionAndUnion mean IMREAD_GRAYSCALE info imread sum range enumerate len seed manual_seed seed manual_seed_all int train_gpu world_size spawn multiprocessing_distributed ngpus_per_node manual_seed main_worker find_free_port workers validate batch_size multiprocessing_distributed SGD DataParallel DistributedDataParallel DataLoader save cuda str PSANet PSPNet set_device SemData DistributedSampler epochs rank save_freq load_state_dict append get_logger CrossEntropyLoss range SummaryWriter format main_process init_process_group sync_bn save_path Compose start_epoch distributed resume classes info load int batch_size_val remove evaluate set_epoch dict convert_sync_batchnorm isfile train weight add_scalar model multiprocessing_distributed index_split zero_grad aux_weight ignore_label base_lr cuda epochs new_tensor sum range update val format main_process zoom_factor param_groups size mean avg classes item info long enumerate int time intersectionAndUnionGPU backward add_scalar poly_learning_rate AverageMeter divmod step len model multiprocessing_distributed ignore_label interpolate cuda new_tensor sum range update val format main_process size mean eval classes item info enumerate time intersectionAndUnionGPU criterion AverageMeter len items list CfgNode deepcopy zip _decode_cfg_value setattr _check_and_coerce_cfg_value_type literal_eval append type conditional_cast debug lower join format print strip readlines split append len float reshape size histogram copy histc view mkdir makedirs _ConvNd isinstance named_parameters bias normal_ _BatchNorm weight modules xavier_normal_ LSTM kaiming_normal_ constant_ Linear _ConvNd isinstance bias dict _BatchNorm modules append weight 
Linear convert putpalette socket bind close AF_INET SOCK_STREAM | # PyTorch Semantic Segmentation ### Introduction This repository is a PyTorch implementation for semantic segmentation / scene parsing. The code is easy to use for training and testing on various datasets. The codebase mainly uses ResNet50/101/152 as backbone and can be easily adapted to other basic classification structures. Implemented networks including [PSPNet](https://hszhao.github.io/projects/pspnet) and [PSANet](https://hszhao.github.io/projects/psanet), which ranked 1st places in [ImageNet Scene Parsing Challenge 2016 @ECCV16](http://image-net.org/challenges/LSVRC/2016/results), [LSUN Semantic Segmentation Challenge 2017 @CVPR17](https://blog.mapillary.com/product/2017/06/13/lsun-challenge.html) and [WAD Drivable Area Segmentation Challenge 2018 @CVPR18](https://bdd-data.berkeley.edu/wad-2018.html). Sample experimented datasets are [ADE20K](http://sceneparsing.csail.mit.edu), [PASCAL VOC 2012](http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=6) and [Cityscapes](https://www.cityscapes-dataset.com). <img src="./figure/pspnet.png" width="900"/> ### Update - 2020.05.15: Branch `master`, use official [nn.SyncBatchNorm](https://pytorch.org/docs/master/nn.html#torch.nn.SyncBatchNorm), only multiprocessing training is supported, tested with pytorch 1.4.0. - 2019.05.29: Branch `1.0.0`, both multithreading training ([nn.DataParallel](https://pytorch.org/docs/stable/nn.html#dataparallel)) and multiprocessing training ([nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html)) (**recommended**) are supported. And the later one is much faster. Use `syncbn` from [EncNet](https://github.com/zhanghang1989/PyTorch-Encoding) and [apex](https://github.com/NVIDIA/apex), tested with pytorch 1.0.0. ### Usage 1. 
Highlight:
   - Fast multiprocessing training ([nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html)) with official [nn.SyncBatchNorm](https://pytorch.org/docs/master/nn.html#torch.nn.SyncBatchNorm). | 2,334
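`util/util.py` in the semseg row above provides `intersectionAndUnion`, the histogram-based accumulator behind the reported mIoU. A self-contained sketch of that metric (the standard per-class definition; argument names here are illustrative, not the repo's exact signature):

```python
import numpy as np

def intersection_and_union(pred, target, num_classes, ignore_index=255):
    """Per-class intersection and union pixel counts for one prediction."""
    pred = pred.copy()
    pred[target == ignore_index] = ignore_index    # drop ignored pixels
    matched = pred[pred == target]                 # correctly labeled pixels
    bins = np.arange(num_classes + 1)
    area_inter, _ = np.histogram(matched, bins=bins)
    area_pred, _ = np.histogram(pred, bins=bins)
    area_target, _ = np.histogram(target, bins=bins)
    return area_inter, area_pred + area_target - area_inter

pred = np.array([[0, 0, 1], [1, 1, 2]])
target = np.array([[0, 1, 1], [1, 1, 2]])
inter, union = intersection_and_union(pred, target, num_classes=3)
print(inter, union)                           # [1 3 1] [2 4 1]
print(np.mean(inter / np.maximum(union, 1)))  # mIoU = 0.75
```

Accumulating these per-image counts over a dataset and dividing at the end gives the usual dataset-level mIoU.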
htconquer/BGAN | ['image retrieval'] | ['Binary Generative Adversarial Networks for Image Retrieval'] | deconv.py BGAN.py create_S.py generator.py evaluate.py utils.py generator saveB average_gradients data_iterator sigmoid discriminator inference loss deconv2d get2d_deconv_output_size _kernel _stride cal_map load_image Opt data_to_tensor list savez imresize print concatenate len tqdm append loadmat array range run arange astype shuffle range len flatten fully_connected random_normal exp transpose square matmul sign reduce_sum reduce_mean clip_by_value log concat reduce_mean zip append expand_dims int value as_dimension isinstance isinstance int imread min resize slice_input_producer | # BGAN To run this codes, you should do follow things:<p> 1. extract resnet feature and then run create_S.py to construct similarity matrix.<br> You can download cifar-10.mat <a href="https://drive.google.com/open?id=0Bzg9TvY-s7y2Zy1CQklaTTJQdUU"> (here) </a>training data and cifar_KNN.npz from <a href="https://drive.google.com/open?id=0Bzg9TvY-s7y2WFFlc3F0T2RkalE"> (here) </a> 2. download vgg19 pretrained model on ImageNet based tensorflow <a href='https://drive.google.com/open?id=0Bzg9TvY-s7y2UE12NVR6MEpxNUE'> here </a>. 3. run this command 'python BGAN.py 32' to train network and then generate 32 bit codes 4. after training done, you can run evalutate.py to calculate MAP. HIT: you need to change some paths in evalutate.py. | 2,335 |
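Step 4 of the BGAN row above evaluates retrieval quality with MAP (`cal_map`). A self-contained numpy sketch of mean average precision over a Hamming-distance ranking of binary codes — a toy illustration with made-up codes, not the repo's exact routine:

```python
import numpy as np

def mean_average_precision(queries, db, query_labels, db_labels):
    """mAP for hashing-based retrieval: rank the database by Hamming
    distance to each query and average precision over relevant hits."""
    aps = []
    for code, label in zip(queries, query_labels):
        dist = np.count_nonzero(db != code, axis=1)      # Hamming distance
        order = np.argsort(dist, kind='stable')
        relevant = (db_labels[order] == label).astype(float)
        if relevant.sum() == 0:
            continue                                     # no relevant items
        prec_at_k = np.cumsum(relevant) / np.arange(1, len(relevant) + 1)
        aps.append(float((prec_at_k * relevant).sum() / relevant.sum()))
    return float(np.mean(aps))

db = np.array([[0, 0, 1], [1, 1, 1], [0, 1, 1]])
db_labels = np.array([0, 1, 0])
print(mean_average_precision(np.array([[0, 0, 1]]), db,
                             np.array([0]), db_labels))  # 1.0
```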
huanzhang12/CROWN-Robustness-Certification | ['adversarial attack'] | ['Provable defenses against adversarial examples via the convex outer adversarial polytope'] | get_bounds_quad.py setup_cifar.py get_bounds_others.py get_bounds_ours.py quad_fit.py save_nlayer_weights.py setup_mnist.py train_nlayer.py main.py general_fit.py utils.py get_bounds_general.py relu_ub_p act_sigmoid_d act_sigmoid relu_ub_n general_ub_pn_scalar find_d_LB act_arctan plot_line general_lb_n act_tanh general_ub_n relu_ub_pn find_d_UB act_arctan_d general_ub_p relu_lb_n general_lb_pn general_lb_pn_scalar general_lb_p act_tanh_d relu_lb_pn relu_lb_p general_ub_pn get_sigmoid_bounds get_arctan_bounds get_general_bounds init_layer_bound_relax_matrix_huan_general get_tanh_bounds get_layer_bound_relax_adaptive_matrix_huan_general_optimized get_relu_bounds get_layer_bound_LP spectral_bound compute_worst_bound_multi fast_compute_max_grad_norm get_layer_bound_relax_adaptive_matrix_huan_optimized inc_counter get_layer_bound_relax_matrix_huan_optimized compute_max_grad_norm get_layer_bound_relax_matrix_huan compute_worst_bound init_layer_bound_relax_matrix_huan ReLU get_weights_list fast_compute_max_grad_norm_2layer get_layer_bound fast_compute_max_grad_norm_2layer_next proj_l2 proj_l1 proj_li get_layer_bound_quad get_layer_bound_quad_both get_lower_quad_parameterized_lgtu get_lower_quad_parameterized get_area plot_parameterized get_best_upper_quad get_upper_quad_parameterized get_best_lower_quad get_lower_quad_parameterized_lltu NLayerModel TwoLayerCIFARModel CIFARModel CIFAR load_batch MNIST extract_data TwoLayerMNISTModel extract_labels MNISTModel MadryMNISTModel train show l0_dist generate_data l2_dist l1_dist linf_dist zeros_like cosh func dfunc func dfunc func func empty_like func find_d_UB range len find_d_LB empty_like func range len func find_d_UB func find_d_LB range diff range diff plot ub_pn lb_pn lb_p lb_n ub_n ub_p ub_pn lb_pn lb_p lb_n ub_n ub_p ub_pn lb_pn lb_p lb_n ub_n ub_p ub_pn 
lb_pn lb_p lb_n ub_n ub_p ub_pn lb_pn lb_p lb_n ub_n ub_p ones empty range len norm zeros_like empty_like copy dot get_bounds range int norm format inf print ones shape zip append expand_dims range len addVar setParam MAXIMIZE str LinExpr Model objVal append range update format astype empty_like MINIMIZE time optimize print addConstr float32 setObjective reset len format print transpose U ascontiguousarray shape append get_weights enumerate sum norm empty_like dot abs max range ones empty range len sum norm zeros_like astype maximum float32 copy empty_like dot abs max range sum norm zeros_like astype maximum float32 copy empty_like dot abs max range list fill astype float32 maximum reversed dot copy empty_like append range zeros_like astype float32 maximum dot zeros range fast_compute_max_grad_norm_2layer range maximum zeros abs fast_compute_max_grad_norm_2layer range fast_compute_max_grad_norm_2layer_next len norm inc_counter zeros empty range format print compute_worst_bound any linspace range tuple compute_max_grad_norm ReLU get_layer_bound_relax_adaptive_matrix_huan_general_optimized max ones append expand_dims sum get_layer_bound_quad range get_layer_bound format inf astype fast_compute_max_grad_norm copy empty_like init_layer_bound_relax_matrix_huan flush minimum ord get_layer_bound_LP print min float32 get_layer_bound_relax_adaptive_matrix_huan_optimized maximum get_layer_bound_relax_matrix_huan_optimized init_layer_bound_relax_matrix_huan_general zeros len norm arange cumsum float32 empty_like maximum abs len zeros_like proj_li grad_ub f_qp_lb range inf astype empty_like grad_lb f_qp_ub T norm print maximum float32 dot proj_l2 proj_l1 get_best_lower_quad len zeros_like get_best_lower_quad get_best_upper_quad grad_ub f_qp_lb range inf astype empty_like grad_lb f_qp_ub T norm print float32 maximum dot proj len print get_area format print get_area format get_upper_quad_parameterized func linspace plot read transpose fromstring append range format print compile 
Lambda Sequential fit SGD add Dense load_weights train_labels save summary train_data Activation Flatten len fromarray join print squeeze flatten around save range flatten argmax str list predictor squeeze argmin len savetxt append range format set sample join print extend eye randint array makedirs | **As requested by IBM, this repository is moved to https://github.com/IBM/CROWN-Robustness-Certification, but we aim to keep both repositories synced up.** The code is released under Apache License v2. CROWN: A Neural Network Verification Framework -------------------- We proposed a new framework, **CROWN**, to **certify** robustness of neural networks with **general activation functions** including but not limited to ReLU, tanh, sigmoid, arctan, etc. **CROWN** is efficient and can deliver lower bounds of minimum adversarial distortions with guarantees (the so-called **certified lower bound** or **certified robustness**). We compare **CROWN** with various certified lower bounds methods including [Global Lipschitz constant](https://arxiv.org/pdf/1312.6199.pdf) and [Fast-Lin](https://github.com/huanzhang12/CertifiedReLURobustness) and show that **CROWN** can certify much large lower bound than the Global Lipschitz constant based approach while improve the quality (up to 28%) of robustness lower bound on ReLU networks of state-of-the-art robustness certification algorithms Fast-Lin. We also compare **CROWN** with robustness score estimate [CLEVER](https://github.com/huanzhang12/CLEVER) and adversarial attack methods ([CW](https://github.com/carlini/nn_robust_attacks),[EAD](https://github.com/ysharma1126/EAD_Attack)). Please See Section 4 and Appendix E of our paper for more details. Cite our work: Huan Zhang\*, Tsui-Wei Weng\*, Pin-Yu Chen, Cho-Jui Hsieh and Luca Daniel, "[**Efficient Neural Network Robustness Certification with General Activation Functions**](http://arxiv.org/abs/1811.00866)", NuerIPS 2018. 
(\* Equal Contribution) ``` @inproceedings{zhang2018crown, author = "Huan Zhang AND Tsui-Wei Weng AND Pin-Yu Chen AND Cho-Jui Hsieh AND Luca Daniel", | 2,336 |
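The CROWN README above contrasts the method with the global-Lipschitz-constant baseline. That baseline (not CROWN itself) is simple enough to sketch in numpy: for a ReLU network, a conservative certified l2 radius is the logit margin divided by a Lipschitz constant built from per-layer spectral norms — the factor of 2 below is my own conservative bound on the Lipschitz constant of the logit difference:

```python
import numpy as np

def lipschitz_certified_radius(weights, x, target, runner_up):
    """Conservative certified l2 radius for a ReLU net given as a list of
    weight matrices: margin(x) / (2 * prod_i ||W_i||_2).  ReLU is
    1-Lipschitz, so activations do not increase the constant."""
    lip = float(np.prod([np.linalg.norm(W, 2) for W in weights]))
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)          # ReLU forward pass
    logits = weights[-1] @ h
    margin = logits[target] - logits[runner_up]
    return margin / (2.0 * lip)             # both logits are lip-Lipschitz

weights = [np.eye(2), np.eye(2)]            # toy 2-layer identity network
print(lipschitz_certified_radius(weights, np.array([3.0, 1.0]), 0, 1))  # 1.0
```

Any perturbation with l2 norm below this radius cannot flip the target/runner-up ordering; CROWN certifies much larger radii by bounding each activation with linear functions instead of a single global constant.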
huawenwei4/dfl | ['face swapping'] | ['DeepFaceLab: Integrated, flexible and extensible face-swapping framework'] | models/Model_XSeg/Model.py mainscripts/VideoEd.py merger/MergerScreen/MergerScreen.py mainscripts/dev_misc.py merger/InteractiveMergerSubprocessor.py samplelib/SampleGeneratorImageTemporal.py core/leras/models/__init__.py samplelib/Sample.py core/joblib/__init__.py core/imagelib/estimate_sharpness.py core/joblib/MPFunc.py core/leras/layers/DenseNorm.py core/leras/nn.py samplelib/SampleProcessor.py core/leras/archis/__init__.py core/leras/layers/Conv2D.py core/stdex.py core/imagelib/text.py core/leras/models/PatchDiscriminator.py core/leras/__init__.py models/Model_XSeg/__init__.py core/imagelib/equalize_and_stack_square.py samplelib/__init__.py core/imagelib/sd/draw.py core/randomex.py core/joblib/MPClassFuncOnDemand.py core/leras/optimizers/__init__.py core/leras/initializers/__init__.py mainscripts/Extractor.py mainscripts/Sorter.py core/mathlib/__init__.py core/leras/optimizers/OptimizerBase.py core/imagelib/draw.py samplelib/SampleGeneratorBase.py core/leras/layers/BatchNorm2D.py core/leras/layers/ScaleAdd.py XSegEditor/QCursorDB.py models/__init__.py core/joblib/SubprocessGenerator.py DFLIMG/DFLIMG.py core/leras/layers/__init__.py models/ModelBase.py facelib/FaceEnhancer.py core/joblib/ThisThreadGenerator.py XSegEditor/QIconDB.py core/imagelib/blursharpen.py merger/MergeMasked.py core/cv2ex.py models/Model_Quick96/__init__.py main.py core/leras/archis/ArchiBase.py localization/localization.py facelib/LandmarksProcessor.py core/leras/layers/Dense.py models/Model_SAEHD/__init__.py mainscripts/Merger.py mainscripts/XSegUtil.py core/imagelib/filters.py XSegEditor/QStringDB.py core/leras/initializers/CA.py core/imagelib/__init__.py core/leras/models/ModelBase.py samplelib/SampleGeneratorFaceTemporal.py core/leras/layers/InstanceNorm2D.py facelib/S3FDExtractor.py samplelib/SampleLoader.py samplelib/SampleGeneratorFaceXSeg.py core/qtex/qtex.py 
DFLIMG/__init__.py core/leras/optimizers/RMSprop.py core/structex.py core/leras/layers/BlurPool.py facelib/__init__.py core/leras/archis/DeepFakeArchi.py core/imagelib/SegIEPolys.py mainscripts/FacesetEnhancer.py merger/FrameInfo.py models/Model_SAEHD/Model.py core/mplib/__init__.py samplelib/SampleGeneratorFacePerson.py merger/MergerScreen/__init__.py core/imagelib/common.py core/imagelib/sd/__init__.py core/leras/layers/AdaIN.py core/leras/layers/LayerBase.py facelib/FANExtractor.py core/leras/models/XSeg.py samplelib/SampleGeneratorFace.py merger/MergeAvatar.py core/pathex.py core/leras/layers/Conv2DTranspose.py core/qtex/QXMainWindow.py merger/__init__.py core/imagelib/morph.py core/joblib/SubprocessorBase.py mainscripts/Trainer.py samplelib/PackedFaceset.py core/mathlib/umeyama.py mainscripts/Util.py core/interact/__init__.py core/leras/layers/FRNorm2D.py core/leras/layers/Saveable.py core/leras/layers/TLU.py facelib/XSegNet.py core/mplib/MPSharedList.py core/leras/models/CodeDiscriminator.py facelib/FaceType.py XSegEditor/XSegEditor.py merger/MergerConfig.py core/leras/ops/__init__.py core/interact/interact.py core/leras/device.py core/imagelib/sd/calc.py core/imagelib/reduce_colors.py samplelib/SampleGeneratorImage.py localization/__init__.py DFLIMG/DFLJPG.py core/qtex/__init__.py core/qtex/QSubprocessor.py core/qtex/QXIconButton.py samplelib/SampleGeneratorFaceCelebAMaskHQ.py core/imagelib/warp.py core/osex.py core/imagelib/color_transfer.py models/Model_Quick96/Model.py process_dev_test process_merge process_videoed_cut_video process_train process_faceset_enhancer process_xsegeditor process_xsegapply process_xsegremove process_xsegremovelabels process_videoed_video_from_sequence process_xsegfetch process_util process_extract fixPathAction process_videoed_extract_video process_sort bad_args process_videoed_denoise_image_sequence cv2_imwrite cv2_imread set_process_dpi_aware get_screen_size set_process_lowest_prio get_image_paths move_all_files 
write_bytes_safe get_first_file_by_stem get_image_unique_filestem_paths get_all_dir_names get_file_paths delete_all_files scantree get_paths get_all_dir_names_startswith random_normal suppress_stdout_stderr struct_unpack LinearMotionBlur blursharpen _scale_array color_transfer color_transfer_idt color_transfer_mkl reinhard_color_transfer laplacian_matrix lab_image_stats seamless_clone linear_color_transfer channel_hist_match color_transfer_sot color_transfer_mix color_hist_match overlay_alpha_image cut_odd_image normalize_channels draw_polygon draw_rect equalize_and_stack_square compute _calculate_sharpness_metric marziliano_method get_block_contrast _simple_thinning estimate_sharpness is_edge_block sobel apply_random_motion_blur apply_random_rgb_levels apply_random_hsv_shift apply_random_bilinear_resize apply_random_gaussian_blur morphTriangle morph_by_points applyAffineTransform reduce_colors SegIEPolys SegIEPolyType SegIEPoly get_text_image draw_text_lines draw_text _get_pil_font get_draw_text_lines warp_by_params gen_warp_params dist_to_edges random_circle_faded circle_faded InteractBase InteractColab InteractDesktop MPClassFuncOnDemand MPFunc SubprocessGenerator Subprocessor ThisThreadGenerator Devices Device nn ArchiBase DeepFakeArchi CAInitializerSubprocessor initializers AdaIN BatchNorm2D BlurPool Conv2D Conv2DTranspose Dense DenseNorm FRNorm2D InstanceNorm2D LayerBase Saveable ScaleAdd TLU CodeDiscriminator ModelBase PatchDiscriminator XSeg dssim concat average_gv_list resize2d_bilinear flatten rgb_to_lab space_to_depth tf_gradients random_binomial style_loss gelu init_weights tf_get_value upsample2d reshape_4D batch_set_value max_pool average_tensor_list gaussian_blur depth_to_space OptimizerBase RMSprop umeyama get_power_of_two rotationMatrixToEulerAngles polygon_area ArrayFillerSubprocessor MPSharedList IndexHost Index2DHost ListHost DictHostCli DictHost QSubprocessor QDarkPalette QActionEx QSize_to_np QImage_from_np QImage_to_np QPixmap_from_np 
QPoint_to_np QPoint_from_np QXIconButton QXMainWindow DFLIMG DFLJPG FaceEnhancer FaceType FANExtractor blur_image_hull_mask mirror_landmarks get_face_struct_mask estimate_pitch_yaw_roll convert_98_to_68 expand_eyebrows get_rect_from_landmarks get_transform_mat draw_rect_landmarks get_cmask transform_points estimate_averaged_yaw calc_face_pitch alpha_to_color get_image_eye_mask draw_landmarks get_image_hull_mask S3FDExtractor XSegNet dev_test_68 dev_test1 dev_resave_pngs extract_vggface2_dataset extract_umd_csv dev_segmented_trash process_folder FacesetEnhancerSubprocessor extract_video video_from_sequence denoise_image_sequence cut_video remove_xseg remove_xseg_labels apply_xseg fetch_xseg FrameInfo InteractiveMergerSubprocessor MergeFaceAvatar process_frame_info MergeMasked MergeMaskedFace MergerConfigMasked MergerConfigFaceAvatar MergerConfig ScreenManager ScreenAssets Screen ModelBase import_model QModel SAEHDModel XSegModel PackedFaceset Sample SampleType SampleGeneratorBase SampleGeneratorFace SampleGeneratorFaceCelebAMaskHQ MaskType SampleGeneratorFacePerson SampleGeneratorFaceTemporal SampleGeneratorFaceXSeg SegmentedSampleFilterSubprocessor SampleGeneratorImage SampleGeneratorImageTemporal FaceSamplesLoaderSubprocessor SampleLoader SampleProcessor QCursorDB QIconDB QStringDB ImagePreviewSequenceBar QUIConfig QCanvasOperator LoaderQSubprocessor CanvasConfig OpMode QCanvas DragType ViewLock ColorScheme QCanvasControlsLeftBar start QCanvasControlsRightBar MainWindow PTEditMode main set_process_lowest_prio main set_process_lowest_prio unpack_faceset pack save_faceset_metadata log_info restore_faceset_metadata_folder pack_faceset save_faceset_metadata_folder restore_faceset_metadata Path input_dir unpack recover_original_aligned_filename set_process_lowest_prio add_landmarks_debug_images main set_process_lowest_prio main set_process_lowest_prio output_ext fps extract_video output_dir input_file set_process_lowest_prio audio_track_id from_time bitrate to_time 
cut_video input_file set_process_lowest_prio denoise_image_sequence set_process_lowest_prio input_dir video_from_sequence set_process_lowest_prio Path set_process_lowest_prio input_dir process_folder dev_test set_process_lowest_prio input_dir start Path set_process_lowest_prio input_dir model_dir apply_xseg Path input_dir set_process_lowest_prio Path remove_xseg set_process_lowest_prio input_dir remove_xseg_labels Path set_process_lowest_prio input_dir Path fetch_xseg set_process_lowest_prio input_dir print_help exit loader_func asarray bytearray imencode suffix nice SetPriorityClass HANDLE GetCurrentProcess SetProcessDPIAware user32 write_bytes parent name unlink rename exists is_dir scandir str list scandir any Path scantree exists append remove get_image_paths name stem set add verbose_print_func Path exists Path exists Path exists str list lower scandir Path startswith append exists str sorted list path lower scandir Path exists name Path rename get_file_paths unlink Path get_file_paths normal empty prod range calcsize warpAffine ones getRotationMatrix2D zeros sum medianBlur addWeighted ones zeros GaussianBlur max dtype reshape astype copy argsort shape bilateralFilter fill empty range eps T clip reshape eig dot shape sqrt cov mean diag T reshape min astype float32 empty_like solve dot shape histogram interp max range tolil setdiag lil_matrix reshape laplacian_matrix shape flatten dot argwhere append range tocsc _scale_array uint8 astype float32 merge lab_image_stats COLOR_LAB2BGR cvtColor split T reshape transpose mean dot eigh eye cholesky split min max float64 astype shape unique interp ravel dtype astype shape channel_hist_match range uint8 astype float32 COLOR_BGR2LAB color_transfer_sot COLOR_LAB2BGR cvtColor uint8 color_transfer_idt color_transfer_mkl astype float32 reinhard_color_transfer linear_color_transfer color_transfer_sot clip shape repeat len shape shape range tuple line range len draw_polygon concatenate shape resize expand_dims max enumerate T 
convolve square mean sqrt array shape zeros float64 marziliano_method astype canny sobel gradient atan2 shape any zeros round range int exp slice get_block_contrast shape flipud round zeros is_edge_block rot90 range cvtColor COLOR_BGR2GRAY rand random clip array COLOR_HSV2BGR random merge COLOR_BGR2HSV randint clip cvtColor split LinearMotionBlur randint random randint GaussianBlur random int rand random shape resize INTER_LINEAR float32 getAffineTransform float32 fillConvexPoly shape boundingRect int32 applyAffineTransform zeros expand_dims array shape morphTriangle zeros simplices fromarray uint8 convert astype COLOR_RGB2BGR array cvtColor truetype asarray Draw get_default_ttf_font_name concatenate text new _get_pil_font shape clip draw_text range len draw_text_lines zeros shape T random astype copy float32 getRotationMatrix2D dict uniform linspace random_normal warpAffine remap norm clip einsum concatenate norm reshape empty abs clip max random randint initializer print inputs append batch_set_value run gradients expand_dims __enter__ __exit__ enumerate reduce_mean __enter__ __exit__ concat pow tanh sqrt pi as_list reshape tile print transpose conv2d_spatial_axes resize transpose reshape transpose randint float32 pad make_kernel tile depthwise_conv2d gaussian_blur dtype constant arange reshape float32 square reduce_mean reducer cast softmax tile as_list reshape transpose as_list reshape transpose constant reshape multiply matmul cast svd T ones matrix_rank mean dot eye sum diag sqrt atan2 shape Format_Grayscale8 Format_BGR888 Format_ARGB32 height reshape convertToFormat width constBits setsize range squeeze invertAffineTransform shape transform expand_dims get norm getAffineTransform polygon_area astype float32 transform_points sqrt estimate_averaged_yaw array transform_points FULL_NO_ALIGN get_transform_mat float32 array copy concatenate expand_eyebrows fillConvexPoly convexHull zeros int getStructuringElement astype fillConvexPoly MORPH_ELLIPSE convexHull 
dilate zeros GaussianBlur shape zeros concatenate process copy blend alpha_to_color zeros get_image_hull_mask gdf max clip int blur getStructuringElement min erode argwhere MORPH_ELLIPSE expand_dims copy draw_landmarks zeros expand_eyebrows concatenate polylines tuple shape get_image_hull_mask array circle get_transform_mat draw_rect transform_points draw_polygon draw_landmarks array array rotationMatrixToEulerAngles concatenate astype float32 pi solvePnP zeros array clip get pop get_image_paths parent log_info name stem progress_bar_generator get_all_dir_names Path mkdir run fromString split cv2_imread Path normalize_channels exists input_bool str log_info name stem append get_image_paths get_rect_from_landmarks unlink mkdir parent cv2_imwrite progress_bar_generator read_text split get str get_image_paths parent log_info name len unlink Path mkdir split log_err run range exists fromString input_bool get_image_paths progress_bar_generator get_all_dir_names Path x get_image_paths cv2_imwrite progress_bar_generator cv2_imread Path get_image_paths parent name stem rename Path mkdir append join get_image_paths parent log_info name copy unlink rmtree mkdir BestGPU run update str get_image_paths parent input_str stem output get_first_file_by_stem unlink input_int mkdir Path log_err input run str suffix parent input_str stem overwrite_output input_int log_err Path input max run update str suffix parent progress_bar_generator output input_int rename log_err Path run clip enumerate suffix input_str wait input_int Path max input_bool str stem input update run_async get_image_paths close mkdir parent overwrite_output get_first_file_by_stem log_err probe load extract initialize get_image_paths log_info set_xseg_mask progress_bar_generator astype float32 get_resolution ask_choose_device shape XSegNet resize save load str get_image_paths log_info parent name has_polys progress_bar_generator copy get_seg_ie_polys mkdir load get_image_paths set_xseg_mask log_info 
progress_bar_generator has_xseg_mask save load get_image_paths log_info input_str has_seg_ie_polys progress_bar_generator save set_seg_ie_polys warpAffine get_transform_mat astype float32 cv2_imread normalize_channels filename clip sharpen_func sharpen_mode concatenate predictor_func add_source_image process_frame_info temporal_face_count append range sharpen_amount predictor_func color_transfer_mkl motion_power bicubic_degrade_power motion_blur_power linear_color_transfer color_transfer_mix boundingRect resize reduce_colors max clip face_enhancer_func hist_match_threshold medianBlur super_resolution_power WARP_INVERSE_MAP ones LinearMotionBlur shape pad blur_mask_modifier image_denoise_power masked_hist_match blursharpen range color_hist_match BORDER_TRANSPARENT warpAffine sharpen_mode xseg_256_extract_func seamlessClone color_transfer_idt astype copy reinhard_color_transfer empty_like motion_deg INTER_CUBIC MORPH_ELLIPSE color_transfer_sot dilate GaussianBlur get_image_hull_mask NORMAL_CLONE uint8 int erode_mask_modifier getStructuringElement get_transform_mat float32 erode argwhere blursharpen_amount color_degrade_power landmarks_list concatenate astype float32 cv2_imread shape normalize_channels MergeMaskedFace filepath clip enumerate str parent cv2_imread locals __import__ globals dict setApplicationName setPalette QDarkPalette Path show str initialize log_info setWindowIcon addApplicationFont AA_EnableHighDpiScaling setStyle setFont gettempdir setAttribute QApplication path_contains app_icon MainWindow exec_ parent QFont raise_ AA_UseHighDpiPixmaps | <table align="center" border="0"><tr><td align="center" width="9999"> # DeepFaceLab <a href="https://arxiv.org/abs/2005.05535"> <img src="https://static.arxiv.org/static/browse/0.3.0/images/icons/favicon.ico" width=14></img> https://arxiv.org/abs/2005.05535</a> ### the leading software for creating deepfakes <img src="doc/DFL_welcome.png" align="center"> </td></tr> <tr><td align="center" width="9999"> <p 
align="center"> | 2,337 |
hufu6371/DORN | ['depth estimation', 'monocular depth estimation'] | ['Deep Ordinal Regression Network for Monocular Depth Estimation'] | caffe/python/caffe/classifier.py caffe/python/caffe/test/test_net.py caffe/scripts/split_caffe_proto.py caffe/examples/pycaffe/layers/pascal_multilabel_datalayers.py caffe/tools/extra/resize_and_crop_images.py demo_nyuv2.py caffe/examples/pycaffe/caffenet.py caffe/src/caffe/test/test_data/generate_sample_data.py caffe/python/caffe/coord_map.py caffe/python/caffe/test/test_nccl.py caffe/python/detect.py caffe/tools/extra/summarize.py caffe/python/caffe/detector.py caffe/python/draw_net.py caffe/python/train.py caffe/examples/finetune_flickr_style/assemble_data.py caffe/tools/extra/extract_seconds.py caffe/python/caffe/io.py caffe/python/caffe/test/test_layer_type_list.py caffe/python/caffe/__init__.py caffe/examples/pycaffe/layers/pyloss.py caffe/examples/web_demo/app.py caffe/python/classify.py caffe/python/caffe/draw.py caffe/examples/pycaffe/tools.py caffe/pylayer/ordinal_decode_layer.py caffe/python/caffe/test/test_draw.py caffe/scripts/download_model_binary.py caffe/python/caffe/test/test_python_layer_with_param_str.py caffe/tools/extra/parse_log.py caffe/python/caffe/net_spec.py caffe/examples/web_demo/exifutil.py caffe/python/caffe/test/test_python_layer.py caffe/python/caffe/test/test_solver.py caffe/scripts/cpp_lint.py caffe/scripts/copy_notebook.py caffe/python/caffe/test/test_io.py demo_kitti.py caffe/python/caffe/pycaffe.py caffe/python/caffe/test/test_coord_map.py caffe/python/caffe/test/test_net_spec.py depth_prediction depth_prediction download_image make_net max_pool caffenet conv_relu fc_relu CaffeSolver SimpleTransformer print_info check_params PascalMultilabelDataLayerSync load_pascal_annotation BatchLoader EuclideanLossLayer start_tornado start_from_terminal embed_image_html classify_upload index allowed_file ImagenetClassifier classify_url open_oriented_im apply_orientation OrdinalDecodeLayer main main main 
parse_args train solve time Classifier coord_map UndefinedMapException conv_params coord_map_from_to AxisMismatchException inverse crop_params compose crop Detector get_edge_label draw_net get_layer_lr_mult get_layer_label get_pooling_types_dict choose_color_by_layertype get_pydot_graph draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto array_to_datum resize_image arraylist_to_blobprotovector_str blobprotovector_str_to_arraylist load_image oversample Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_get_id_name _Net_inputs _Net_layer_dict TestCoordMap coord_net_spec getFilenames TestDraw TestBlobProtoToArray TestArrayToDatum TestLayerTypeList TestNCCL TestLevels TestStages simple_net_file TestNet TestAllInOne lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer phase_net_file TestPythonLayer ParameterLayer PhaseLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright 
IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop print_table printed_len summarize_net main read_net format_param int exp reshape transpose astype float32 copy resize zeros imread forward range imread urlretrieve Convolution InnerProduct Data SoftmaxWithLoss LRN Accuracy max_pool InnerProduct conv_relu fc_relu Dropout join list getElementsByTagName get_data_from_tag csr_matrix dict zip zeros float range enumerate len print format get read info load_image classify_image StringIO join replace info secure_filename save filename open_oriented_im classify_image fromarray replace astype save resize StringIO items listen HTTPServer format print start WSGIContainer update start_tornado add_option OptionParser debug port parse_args ImagenetClassifier forward run hasattr _getexif astype float32 tile apply_orientation open transpose model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load 
time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser display_lrm read NetParameter output_image_file rankdir Merge TRAIN draw_net_to_file TEST Process str join init_log start append new_uid range log len before_backward layers display add_callback after_backward after_forward Timer append before_forward range len max_iter restore time set_solver_count set_solver_rank add_callback set_device set_multiprocess SGDSolver after_backward set_mode_gpu layer_wise_reduce step bcast NCCL len get params array get params array crop_params conv_params pop collect_bottoms add fn coord_map compose coord_map_from_to items list DESCRIPTOR batch_size str num_output getattr join get_layer_lr_mult name kernel_size stride get_pooling_types_dict pad any append type add_edge get_edge_label list Dot exclude get_layer_label add_node values choose_color_by_layertype Edge Node bottom append type layer include top data array diff shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array LayerParameter list NetParameter _to_proto extend Counter OrderedDict values iteritems hasattr isinstance extend add getattr setattr list OrderedDict _blobs _blob_names zip list _blob_loss_weights OrderedDict _blob_names zip _layer_names list layers OrderedDict zip OrderedDict list keys list keys iteritems layers index set outputs _forward len iteritems _backward layers inputs index set len iteritems asarray extend copy next _batch itervalues forward len iteritems asarray backward extend copy next _batch itervalues zip_longest zip forward len ascontiguousarray concatenate itervalues zeros next range len data Pooling pool Convolution NetSpec Deconvolution conv Input join walk dirname abspath NamedTemporaryFile str 
close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU InnerProduct data NetSpec DummyData Silence data2 error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find range len FindEndOfExpressionInLine range len FindStartOfExpressionInLine error min search I range len FileInfo RepositoryName sep sub ParseNolintSuppressions error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin NumLines Match raw_lines range Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub find CheckComment Match range Search lines_without_raw_strings error group starting_linenum Match range Search error rfind len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine find Match range CloseExpression len error Match finditer normalize isinstance PY2 GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set itervalues 
append M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression lstrip findall Match range Search ReplaceAll error Match Search endswith replace setdefault group search CleanseComments open list FilesBelongToSameModule error search copy sub NumLines FullName keys range error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard NumLines CheckCompletedBlocks CheckForIncludeWhatYouUse range ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile getwriter PY2 int time write flush load join index int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir logfile_path NetParameter decay_mult format name lr_mult append print zip len get join str format convolution_param list setdefault param kernel_size map set top bottom append type module layer enumerate print_table filename 
summarize_net read_net | # DORN: Deep Ordinal Regression Network for Monocular Depth Estimation ### Paper #### [H. Fu, M. Gong, C. Wang, K. Batmanghelich and D. Tao: Deep Ordinal Regression Network for Monocular Depth Estimation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.](https://arxiv.org/abs/1806.02446) ### Introduction The shared code is a Caffe implementation of our CVPR18 paper (DORN). The provided Caffe is not our internal one, but it can still be used for evaluation. We provide the pretrained models for KITTI and NYUV2 here (see Tab. 3 and Tab. 4 in our paper). The code has been tested successfully on CentOS release 6.9, Cuda 9.0.176, Tesla V100, Anaconda python 2.7, Cudnn 7.0. Our method won the 1st prize in the [Robust Vision Challenge 2018](http://www.robustvision.net/index.php). We ranked 1st place on both [KITTI](http://www.cvlibs.net/datasets/kitti/eval_depth.php?benchmark=depth_prediction) and [ScanNet](http://dovahkiin.stanford.edu/adai/). Slides can be downloaded [here](https://drive.google.com/file/d/1d2b8rNk4Mxc5dBDrj8lOStKxGVwMXoq7/view?usp=sharing). KITTI ScanNet | 2,338
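The DORN README does not show how a depth map is decoded from the network's ordinal outputs (the repository does this in its Caffe `OrdinalDecodeLayer`). Below is an illustrative NumPy sketch of the paper's scheme, not the repository's implementation: each pixel's label is the number of ordinal thresholds it is predicted to exceed, and the thresholds follow a spacing-increasing discretization (SID) of an assumed depth range `[alpha, beta]`.

```python
import numpy as np

def sid_thresholds(alpha, beta, K):
    # Spacing-increasing discretization: K+1 thresholds spaced evenly in log-depth.
    return np.exp(np.log(alpha) + np.arange(K + 1) * np.log(beta / alpha) / K)

def decode_depth(ord_probs, alpha=1.0, beta=80.0):
    """Decode a depth map from ordinal probabilities (sketch, not DORN's code).

    ord_probs: (K, H, W) array where ord_probs[k] = P(depth > t_k) per pixel.
    """
    K = ord_probs.shape[0]
    t = sid_thresholds(alpha, beta, K)
    labels = (ord_probs > 0.5).sum(axis=0)    # how many thresholds each pixel exceeds
    labels = np.clip(labels, 0, K - 1)
    return 0.5 * (t[labels] + t[labels + 1])  # midpoint of the selected SID bin
```

With `alpha=1, beta=16, K=4` the thresholds are `[1, 2, 4, 8, 16]`, so a pixel whose first two ordinal probabilities exceed 0.5 falls in the bin `[4, 8]` and decodes to depth 6.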
hughbzhang/HUSE | ['text generation'] | ['Unifying Human and Statistical Evaluation for Natural Language Generation'] | code/summarization_example.py code/full_example_utils.py code/full_summarization_example.py code/utils.py extract get_stats replicate_exps get_accs plot_replicates get_data_rand dump_file get_data_ctx extract_ctx get_data opt_thresh classify_examples print_matrix locate_model_failures main basics detailed_analysis get_data main get_stats KNNClassifier plot calculate_HUSE get_accuracy items list transpose fit append array split seed int get_accs get_data_rand concatenate transpose choice mean get_data linspace append array range subplots arange plot suptitle set_xlabel close tight_layout subplots_adjust set_ylabel set_xticks linspace floor legend savefig sorted mean isinf append keys append LeaveOneOut split fit mean sort argsort float sum len sorted defaultdict mean isinf append keys list choice mean isinf append keys append join dump_file array extract format plot print calculate_HUSE get_data basics detailed_analysis replicate_exps plot_replicates get_data_ctx extract_ctx classify_examples KNeighborsClassifier locate_model_failures format plot print calculate_HUSE get_data get_stats KNNClassifier transpose get_accuracy array append split fit arange subplots KNeighborsClassifier RdBu max append_axes make_axes_locatable set_title transpose set_xlabel shape scatter contourf array legend savefig meshgrid set_xlim tight_layout set_tick_params reshape min hist set_ylabel std set_ylim fit | ## HUSE — Human Unified with Statistical Evaluation  **Picture:** *HUSE is twice the classification error of distinguishing reference and generated text represented as (human judgment, pmodel) pairs. This easily identifies samples with defects in both quality (Sharon has stroke . . .) 
and diversity (Cleared coach facing ...).* This repository contains the ready to use evaluation methodology of the following paper: > **Unifying Human and Statistical Evaluation for Natural Language Generation**<br> > Tatsunori B. Hashimoto*, Hugh Zhang*, Percy Liang<br> > *Equal contribution > > **Abstract:** *How can we measure whether a natural language generation system produces both high quality and diverse outputs? Human evaluation captures quality but not diversity, as it does not catch models that simply plagiarize from the training set. On the other hand, statistical evaluation (i.e., perplexity) captures diversity but not quality, as models that occasionally emit low quality samples would be insufficiently penalized. In this paper, we propose a unified framework which evaluates both diversity and quality, based on the optimal error rate of predicting whether a sentence is human- or machine-generated. We demonstrate that this error rate can be efficiently estimated by combining human and statistical evaluation, using an evaluation metric which we call HUSE. On summarization and chit-chat dialogue, we show that HUSE detects diversity defects which fool pure human evaluation and that techniques such as annealing for improving quality actually decrease HUSE due to decreases in diversity.* | 2,339 |
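The HUSE score described above can be sketched in a few lines of NumPy. The repository's own pipeline (see `calculate_HUSE` and `KNNClassifier` in its code listing) uses scikit-learn's `KNeighborsClassifier` with leave-one-out evaluation; the function below is a hypothetical, self-contained reimplementation of the same idea, with illustrative per-axis normalization and `k=3`:

```python
import numpy as np

def huse_score(human_feats, model_feats, k=3):
    """Leave-one-out k-NN estimate of HUSE (a sketch, not the official code).

    Each row is a (human judgment, model log-probability) pair; HUSE is twice
    the error of classifying a sample as human- vs. machine-generated.
    """
    X = np.vstack([human_feats, model_feats]).astype(float)
    y = np.array([0] * len(human_feats) + [1] * len(model_feats))
    # Normalize each feature so neither axis dominates the distance metric.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    errors = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # leave the held-out point out
        nn = np.argsort(d)[:k]
        pred = int(y[nn].sum() * 2 > k)      # majority vote among k neighbors
        errors += int(pred != y[i])
    return 2.0 * errors / len(X)
```

When human and model samples are easily separated in this 2-D space, the classifier makes no mistakes and the score is 0; heavily overlapping distributions push the score toward 1 and above.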
hugochan/Eye-Tracker | ['gaze estimation'] | ['Eye Tracking for Everyone'] | itracker_adv.py validation_script.py itracker.py EyeTracker load_model train prepare_data test plot_loss shuffle_data validate_model load_data extract_validation_handles main normalize next_batch EyeTracker load_model train prepare_data test plot_loss shuffle_data validate_model load_data extract_validation_handles main normalize next_batch try_with_random_data load_model validate_model extract_validation_handles main load_validation_data load mean shape astype reshape normalize astype range arange shuffle len arange get_collection_ref print join restore import_meta_graph print float32 placeholder reduce_sum sqrt reduce_mean run squared_difference subplots arange plot set_xlabel tight_layout set_ylabel twinx savefig legend tick_params len EyeTracker print prepare_data default_timer save_loss plot_loss load_data input array input prepare_data load_data add_argument test ArgumentParser parse_args train next_batch int get_shape print exit len exit print load int reshape mean append sum range print rand any run print exit val_data | # Eye Tracker Implemented and improved the iTracker model proposed in the paper [Eye Tracking for Everyone](https://arxiv.org/abs/1606.05814).  *<center><h3>Figure 1: iTracker architecture</h3></center>*  *<center><h3>Figure 2: modified iTracker architecture</h3></center>* Figures 1 and 2 show the architectures of the iTracker model and the modified model. The only difference between the modified model and the iTracker model is that we concatenate the face layer FC-F1 and face mask layer FC-FG1 first, after applying a fully connected layer FC-F2, we then concatenate the eye layer FC-E1 and FC-F2 layer. | 2,340 |
huizhang0110/AON | ['optical character recognition', 'scene text recognition'] | ['AON: Towards Arbitrarily-Oriented Text Recognition'] | train.py demo.py data/dataset_util.py eval.py model_aon.py sync_attention_wrapper.py test.py label_map.py data/create_tfrecord.py data/standard_fields.py input_data.py main classfier repeated_run_evaluation test_get_batch_data evaluation get_batch_data read_tfrecord_use_queue_runner test_python_api read_tfrecord_use_pythonAPI test_queue_runner get_batch_data LabelMap test_label_map _max_pool _weight _bilstm _attention_based_decoder combined_static_and_dynamic_shape get_train_op _fc _bias _filter_gate test base_cnn _conv inference _arbitrary_orientation_network get_init_op SyncAttentionWrapper load_image test_single_picture main main make_tfrecord_from_tags recursive_parse_xml_to_dict int64_list_feature read_examples_list float_list_feature int64_feature bytes_feature bytes_list_feature TfExampleFields InputDataFields argmax base_cnn _fc start_queue_runners format should_stop minimize print sparse_softmax_cross_entropy group close placeholder Coordinator classfier reduce_mean run global_variables_initializer range Session local_variables_initializer get_batch_data join data_dir lower resize append imread show print imshow clf zip next get_batch_data batch_size reset_default_graph Session get_batch_data import_meta_graph run restore run_steps get_default_graph next range get_init_op format latest_checkpoint close print exp_dir get_tensor_by_name len time format latest_checkpoint print exp_dir show decode BytesIO seek print write Example ParseFromString tf_record_iterator open read_tfrecord_use_pythonAPI resize_images read TFRecordReader string_input_producer reshape cast int32 string parse_single_example shuffle_batch decode_jpeg show read_tfrecord_use_queue_runner start_queue_runners subplot should_stop print group close exit Coordinator imshow enumerate zip global_variables_initializer range Session local_variables_initializer 
run text_to_labels constant sequence_mask print reduce_max group float32 close tables_initializer cast global_variables_initializer labels_to_text LabelMap Session local_variables_initializer run as_list shape append enumerate xavier_initializer get_variable constant_initializer get_variable combined_static_and_dynamic_shape _filter_gate _arbitrary_orientation_network base_cnn _attention_based_decoder minimize AdadeltaOptimizer global_variables_initializer tables_initializer group local_variables_initializer inference constant random_normal imread resize import_meta_graph get_tensor_by_name restore format latest_checkpoint get_init_op print close exp_dir get_default_graph reset_default_graph Session run Saver save restore merge_all inference get_init_op latest_checkpoint astype FileWriter lower single_seq tfrecord_file_path max_steps join Variable graph get_train_op exp_dir add_summary get_tensor_by_name TFRecordWriter output_path format TFRecordWriter print close output_path append | # AON-tensorflow Tensorflow implementation of [AON: Towards Arbitrarily-Oriented Text Recognition](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cheng_AON_Towards_Arbitrarily-Oriented_CVPR_2018_paper.pdf) that extracts feature sequences in four directions and combines them into an attention-based decoder to generate character sequence. | 2,341 |
hukim1112/weakalign | ['semantic correspondence'] | ['End-to-end weakly-supervised semantic alignment'] | util/train_test_fn.py data/synth_dataset.py data/caltech_dataset.py image/normalization.py data/download_datasets.py geotnf/point_tnf.py data/weak_dataset.py image/normalization_omniglot.py model/loss.py geotnf/flow.py test.py util/eval_util.py data/pascal_parts_dataset.py util/py_util.py model/cnn_geometric_model.py options/options.py demo (copy).py test_dataset.py util/torch_util.py geotnf/transformation.py demo.py train_strong.py eval.py data/tss_dataset.py train_weak.py data/pf_dataset.py util/dataloader.py affTpsTnf process_epoch inlier_score_function loss_fun process_epoch CaltechDataset download_caltech download_TSS download_and_uncompress download_pascal download_pascal_parts download_PF_pascal download_PF_willow PascalPartsDataset OmniglotDataset PFPascalDataset PFDataset SynthDataset TSSDataset ImagePairDataset th_sampling_grid_to_np_flow write_flo_file read_flo_file warp_image np_flow_to_th_sampling_grid unnormalize_axis PointTnf PointsToUnitCoords normalize_axis PointsToPixelCoords SynthPairTnf ComposedGeometricTnf SynthTwoStageTnf GeometricTnf AffineGridGen TpsGridGen SynthTwoStageTwoPairTnf SynthTwoPairTnf AffineGridGenV2 normalize_image NormalizeImageDict normalize_image NormalizeImageDict CNNGeometric FeatureExtraction FeatureRegression featureL2Norm FeatureCorrelation TwoStageCNNGeometric WeakInlierCount TransformedGridLoss TwoStageWeakInlierCount ArgumentParser DataLoaderIter default_collate DataLoader _worker_loop ExceptionWrapper pin_memory_batch _pin_memory_loop poly_str_to_mask pck_metric obj_ptr flow_metrics pck point_dist_metric inlier_count localization_error intersection_over_union mean_dist poly_to_mask area_metrics pascal_parts_metrics compute_metric label_transfer_accuracy create_file_path str_to_bool save_checkpoint BatchTensorToVars collate_custom Softmax1D expand_dim test_fun_strong train_fun_strong test_fun_weak 
print_train_progress train_fun_weak unsqueeze cat grid_sample GeometricTnf format backward model print step zero_grad capitalize loss_fn batch_preprocessing_fn enumerate len inliersComposed inliersAffine mean inlier_score_function model endswith print extractall write close dirname open ZipFile exists makedirs print join basename download_and_uncompress print join basename download_and_uncompress print join basename download_and_uncompress print join basename download_and_uncompress join basename download_and_uncompress print rename print join basename download_and_uncompress print close float32 int32 resize fromfile open tofile astype float32 close array open uint8 grid_sample Variable astype unsqueeze np_flow_to_th_sampling_grid list concatenate Variable unsqueeze meshgrid cuda range normalize_axis unnormalize_axis list concatenate squeeze meshgrid numpy range clone normalize_axis expand_as unnormalize_axis clone expand_as isinstance Variable size add expand div unsqueeze cuda is_cuda expand_as seed get set_num_threads put collate_fn get pin_memory_batch isinstance put is_tensor list sum isinstance Sequence new zip _new_shared Mapping Mapping is_tensor isinstance Sequence model max str list inlier_count range flatnonzero format size eval batch_tnf keys enumerate minimum int isinstance print category zeros metric_fun flow_output_dir len ne float size mean pow expand_as zeros le sum range ne size mean pow div expand_as zeros sum range data PointTnf list PointsToUnitCoords size numpy affPointTnf range mean_dist tpsPointTnf PointsToPixelCoords list inliersComposed size TwoStageWeakInlierCount numpy range data PointTnf list PointsToUnitCoords size pck pck_alpha numpy affPointTnf range tpsPointTnf PointsToPixelCoords int theta_to_sampling_grid grid_sample pck_metric Variable size transpose unsqueeze intersection_over_union numpy cuda range localization_error unsqueeze linspace cuda poly_str_to_mask view intersection_over_union affPointTnf meshgrid range cat 
label_transfer_accuracy th_sampling_grid_to_np_flow grid_sample size int PointTnf pointsToGrid Variable numpy tpsPointTnf unsqueeze linspace cuda view write_flo_file create_file_path affPointTnf meshgrid range cat th_sampling_grid_to_np_flow size int PointTnf join pointsToGrid Variable numpy flow_output_dir tpsPointTnf polygon zeros Variable fromstring unsqueeze poly_to_mask cuda mul float sum int list obj_ptr astype where mean meshgrid abs range list min where meshgrid max range dirname makedirs is_tensor Mapping isinstance exp unsqueeze join basename copyfile dirname save makedirs size list format model pair_generation_tnf backward print zero_grad loss_fn train step enumerate len format model pair_generation_tnf print eval loss_fn enumerate model zero_grad print_train_progress FeatureCorrelation iter next sum FeatureExtraction format batch_tnf TpsGridRegularityLoss enumerate FeatureRegression backward print loss_fn train step tgrl len format model print next eval batch_tnf iter loss_fn sum enumerate print format | # End-to-end weakly-supervised semantic alignment  ## About This is the implementation of the paper "End-to-end weakly-supervised semantic alignment" by I. Rocco, R. Arandjelović and J. Sivic. For more information check out the project [[website](http://www.di.ens.fr/willow/research/weakalign/)] and the paper on [[arXiv](https://arxiv.org/abs/1712.06861)]. ## Getting started ### Dependencies The code is implemented using Python 3 and PyTorch 0.2. All dependencies are included in the standard Anaconda distribution. ### Training The code includes scripts for pre-training the models with strong supervision (`train_strong.py`) as proposed in [our previous work](http://www.di.ens.fr/willow/research/cnngeometric/), as well as to fine-tune the model using weak supervision (`train_weak.py`) as proposed in this work. | 2,342 |
human-aimachine-art/pytorch-STROTSS-improved | ['style transfer'] | ['Style Transfer by Relaxed Optimal Transport and Self-Similarity', 'Interactive Neural Style Transfer with Artists'] | loss_utils.py test.py vgg_pt.py utils.py style_transfer.py pairwise_distances_sq_l2 sinkhorn_logsumexp compute_dp_loss compute_modified_cost_matrix pairwise_distances_cos compute_moment_loss RelaxedOptimalTransportSelfSimilarityLoss compute_distance_matrix compute_remd_loss run_style_transfer style_transfer load_img downsample rgb_to_yuv synthetize_image_from_laplacian_pyramid resize create_laplacian_pyramid load_style_features extract_regions Vgg16_pt t mm view norm pairwise_distances_sq_l2 sqrt pairwise_distances_cos to range len sinkhorn_logsumexp size min transpose mean compute_distance_matrix max size transpose mean mm range len exp size transpose mean sum compute_distance_matrix detach sum exp fill_ size logsumexp compute_modified_cost_matrix t is_available range log shuffle_feature_inds zero_grad numpy synthetize_image_from_laplacian_pyramid RelaxedOptimalTransportSelfSimilarityLoss RMSprop append to range astype eval load_style_features enumerate backward print cnn init_inds create_laplacian_pyramid step time format imwrite print size transpose clone downsample item resize to range style_transfer convert to_tensor isinstance append range downsample resize size len range resize reshape transpose astype float32 unique append to numpy range | # PyTorch implementation of Style Transfer by Relaxed Optimal Transport and Self-Similarity (STROTSS) with improvements Implements [STROTSS](https://arxiv.org/abs/1904.12785) with sinkhorn EMD as introduced in the paper [Interactive Neural Style Transfer with artists](https://arxiv.org/pdf/2003.06659). This code is inspired by [the original implementation](https://github.com/nkolkin13/STROTSS) released by the authors of STROTSS. 
## Dependencies:
* python3 >= 3.6
* pytorch >= 1.0
* torchvision >= 0.4
* imageio >= 2.2
* numpy >= 1.1
## Usage: | 2,343 |
hustvl/BMaskR-CNN | ['instance segmentation', 'semantic segmentation'] | ['Boundary-preserving Mask R-CNN'] | projects/DensePose/densepose/densepose_coco_evaluation.py tests/test_checkpoint.py detectron2/modeling/proposal_generator/build.py detectron2/evaluation/cityscapes_evaluation.py detectron2/utils/visualizer.py detectron2/utils/collect_env.py projects/DensePose/densepose/data/combined_loader.py projects/BMaskR-CNN/train_net.py tests/modeling/test_fast_rcnn.py tools/visualize_data.py projects/PointRend/point_rend/semantic_seg.py projects/TensorMask/tensormask/arch.py detectron2/modeling/anchor_generator.py detectron2/data/detection_utils.py projects/DensePose/densepose/data/datasets/dataset_type.py projects/TridentNet/tridentnet/trident_rcnn.py detectron2/modeling/proposal_generator/rpn.py detectron2/checkpoint/__init__.py projects/DensePose/densepose/data/transform/__init__.py projects/DensePose/densepose/data/video/video_keyframe_dataset.py projects/TensorMask/tests/__init__.py detectron2/modeling/postprocessing.py detectron2/config/compat.py detectron2/utils/analysis.py projects/TensorMask/tensormask/layers/swap_align2nat.py projects/DensePose/train_net.py detectron2/model_zoo/__init__.py detectron2/evaluation/sem_seg_evaluation.py detectron2/evaluation/lvis_evaluation.py detectron2/utils/registry.py detectron2/modeling/backbone/backbone.py detectron2/modeling/roi_heads/fast_rcnn.py detectron2/data/datasets/lvis.py projects/TensorMask/tensormask/layers/__init__.py projects/DensePose/densepose/modeling/test_time_augmentation.py projects/TensorMask/tests/test_swap_align2nat.py detectron2/modeling/roi_heads/mask_head.py detectron2/data/samplers/__init__.py projects/DensePose/densepose/evaluator.py detectron2/utils/logger.py detectron2/structures/__init__.py detectron2/modeling/meta_arch/semantic_seg.py projects/DensePose/tests/common.py tests/data/test_sampler.py tests/modeling/test_roi_pooler.py detectron2/modeling/meta_arch/retinanet.py 
detectron2/export/caffe2_export.py detectron2/export/shared.py detectron2/data/samplers/grouped_batch_sampler.py detectron2/layers/roi_align.py setup.py detectron2/modeling/roi_heads/rotated_fast_rcnn.py detectron2/utils/events.py detectron2/layers/roi_align_rotated.py projects/DensePose/densepose/data/structures.py detectron2/utils/video_visualizer.py tests/structures/test_rotated_boxes.py detectron2/data/samplers/distributed_sampler.py projects/DensePose/tests/test_model_e2e.py detectron2/modeling/sampling.py projects/PointRend/point_rend/color_augmentation.py detectron2/utils/__init__.py projects/DensePose/apply_net.py detectron2/structures/image_list.py projects/PointRend/point_rend/config.py tests/test_model_analysis.py tools/analyze_model.py detectron2/evaluation/panoptic_evaluation.py tests/modeling/test_anchor_generator.py detectron2/layers/__init__.py detectron2/modeling/roi_heads/roi_heads.py detectron2/modeling/backbone/__init__.py docs/conf.py projects/PointRend/point_rend/point_features.py tools/visualize_json_results.py detectron2/utils/comm.py projects/BMaskR-CNN/bmaskrcnn/mask_head.py detectron2/evaluation/testing.py detectron2/structures/instances.py detectron2/data/__init__.py detectron2/modeling/box_regression.py detectron2/solver/lr_scheduler.py projects/TridentNet/tridentnet/trident_rpn.py tests/layers/test_nms_rotated.py tools/train_net.py detectron2/data/datasets/pascal_voc.py detectron2/export/caffe2_modeling.py detectron2/data/catalog.py projects/DensePose/densepose/data/video/__init__.py tests/structures/test_imagelist.py detectron2/config/__init__.py detectron2/modeling/test_time_augmentation.py projects/DensePose/densepose/vis/extractor.py projects/TensorMask/tensormask/config.py detectron2/modeling/roi_heads/box_head.py tests/test_export_caffe2.py detectron2/modeling/meta_arch/__init__.py detectron2/export/c10.py detectron2/export/__init__.py projects/DensePose/densepose/data/transform/image.py 
projects/DensePose/densepose/utils/dbhelper.py demo/demo.py detectron2/solver/build.py tests/modeling/test_matcher.py detectron2/export/patcher.py tools/benchmark.py projects/PointRend/point_rend/__init__.py projects/DensePose/densepose/data/inference_based_loader.py detectron2/data/transforms/__init__.py detectron2/config/defaults.py detectron2/export/caffe2_inference.py tests/data/test_rotation_transform.py projects/DensePose/densepose/densepose_head.py detectron2/modeling/meta_arch/rcnn.py projects/DensePose/densepose/config.py detectron2/checkpoint/catalog.py detectron2/evaluation/fast_eval_api.py detectron2/layers/batch_norm.py detectron2/data/datasets/coco.py projects/DensePose/densepose/data/__init__.py tests/data/test_detection_utils.py detectron2/evaluation/__init__.py projects/DensePose/tests/test_structures.py projects/DensePose/densepose/data/datasets/chimpnsee.py projects/TensorMask/train_net.py projects/DensePose/densepose/roi_head.py projects/DensePose/densepose/utils/transform.py detectron2/checkpoint/detection_checkpoint.py projects/TridentNet/tridentnet/config.py detectron2/data/transforms/augmentation.py projects/DensePose/densepose/data/datasets/coco.py projects/TridentNet/tridentnet/trident_conv.py detectron2/export/api.py projects/PointRend/point_rend/dataset_mapper.py detectron2/structures/keypoints.py detectron2/engine/__init__.py detectron2/utils/serialize.py detectron2/config/config.py detectron2/layers/blocks.py projects/DensePose/densepose/vis/bounding_box.py detectron2/engine/train_loop.py detectron2/modeling/matcher.py projects/DensePose/densepose/vis/base.py detectron2/data/dataset_mapper.py detectron2/data/datasets/cityscapes.py detectron2/modeling/proposal_generator/proposal_utils.py projects/DensePose/densepose/engine/trainer.py detectron2/data/datasets/builtin.py tests/data/test_coco_evaluation.py tools/convert-torchvision-to-d2.py detectron2/modeling/roi_heads/cascade_rcnn.py projects/DensePose/tests/test_frame_selector.py 
projects/TridentNet/tridentnet/__init__.py projects/BMaskR-CNN/bmaskrcnn/__init__.py detectron2/data/transforms/transform.py detectron2/modeling/proposal_generator/rrpn.py detectron2/engine/hooks.py detectron2/layers/mask_ops.py projects/BMaskR-CNN/bmaskrcnn/config.py projects/PointRend/train_net.py projects/DensePose/densepose/data/datasets/__init__.py projects/TridentNet/tridentnet/trident_backbone.py tests/layers/test_mask_ops.py demo/predictor.py detectron2/data/datasets/__init__.py detectron2/data/datasets/builtin_meta.py detectron2/modeling/roi_heads/keypoint_head.py detectron2/model_zoo/model_zoo.py detectron2/data/build.py detectron2/evaluation/rotated_coco_evaluation.py projects/DensePose/densepose/data/utils.py tests/modeling/test_rpn.py tests/layers/test_roi_align_rotated.py projects/DensePose/densepose/data/video/frame_selector.py detectron2/modeling/roi_heads/__init__.py projects/BMaskR-CNN/bmaskrcnn/roi_heads.py detectron2/data/datasets/lvis_v1_categories.py detectron2/modeling/proposal_generator/__init__.py projects/DensePose/densepose/data/dataset_mapper.py projects/TensorMask/setup.py tests/structures/test_instances.py projects/PointRend/point_rend/point_head.py tests/structures/test_boxes.py tests/test_model_zoo.py detectron2/checkpoint/c2_model_loading.py projects/DensePose/tests/test_combine_data_loader.py projects/PointRend/point_rend/roi_heads.py tools/deploy/caffe2_converter.py detectron2/__init__.py detectron2/data/datasets/register_coco.py projects/DensePose/tests/test_image_resize_transform.py detectron2/layers/nms.py detectron2/modeling/meta_arch/build.py projects/DensePose/tests/test_video_keyframe_dataset.py tests/layers/test_roi_align.py tests/modeling/test_box2box_transform.py tests/modeling/test_roi_heads.py detectron2/structures/masks.py projects/DensePose/densepose/data/datasets/builtin.py projects/DensePose/densepose/utils/logger.py detectron2/solver/__init__.py tests/data/test_coco.py detectron2/evaluation/coco_evaluation.py 
detectron2/layers/wrappers.py projects/DensePose/densepose/__init__.py detectron2/engine/defaults.py projects/DensePose/query_db.py projects/DensePose/densepose/data/build.py detectron2/utils/memory.py tests/test_visualizer.py detectron2/layers/deform_conv.py projects/DensePose/densepose/engine/__init__.py tools/plain_train_net.py tests/modeling/test_model_e2e.py detectron2/structures/rotated_boxes.py projects/DensePose/densepose/vis/densepose.py projects/TridentNet/train_net.py detectron2/modeling/backbone/resnet.py detectron2/layers/rotated_boxes.py detectron2/modeling/backbone/fpn.py projects/BMaskR-CNN/bmaskrcnn/cascade_rcnn.py dev/packaging/gen_install_table.py detectron2/utils/colormap.py detectron2/evaluation/pascal_voc_evaluation.py projects/DensePose/tests/test_setup.py projects/PointRend/point_rend/coarse_mask_head.py detectron2/modeling/backbone/build.py detectron2/modeling/__init__.py tests/data/test_transforms.py detectron2/modeling/meta_arch/panoptic_fpn.py detectron2/utils/env.py projects/TensorMask/tensormask/__init__.py tests/test_config.py detectron2/engine/launch.py detectron2/data/transforms/augmentation_impl.py detectron2/layers/shape_spec.py detectron2/data/datasets/lvis_v0_5_categories.py detectron2/modeling/poolers.py tests/__init__.py detectron2/evaluation/evaluator.py detectron2/structures/boxes.py detectron2/data/common.py get_model_zoo_configs get_extensions get_version get_parser setup_cfg VisualizationDemo AsyncPredictor convert_c2_detectron_names convert_basic_c2_names align_and_update_state_dicts Detectron2Handler ModelCatalogHandler ModelCatalog DetectionCheckpointer upgrade_config downgrade_config _rename _RenameConverter ConverterV2 ConverterV1 guess_version set_global_cfg CfgNode get_cfg _get_args_from_config configurable _called_with_cfg print_instances_class_histogram build_detection_test_loader filter_images_with_few_keypoints build_batch_data_loader build_detection_train_loader load_proposals_into_dataset 
worker_init_reset_seed get_detection_dataset_dicts trivial_batch_collator filter_images_with_only_crowd_annotations MetadataCatalog DatasetCatalog Metadata AspectRatioGroupedDataset DatasetFromList MapDataset DatasetMapper check_metadata_consistency annotations_to_instances build_augmentation create_keypoint_hflip_indices transform_instance_annotations _apply_exif_orientation annotations_to_instances_rotated transform_proposals gen_crop_transform_with_instance check_image_size convert_PIL_to_numpy convert_image_to_rgb SizeMismatchError filter_empty_instances transform_keypoint_annotations read_image register_all_coco register_all_cityscapes register_all_lvis register_all_pascal_voc _get_coco_instances_meta _get_builtin_metadata _get_coco_panoptic_separated_meta get_cityscapes_files cityscapes_files_to_dict load_cityscapes_semantic load_cityscapes_instances load_sem_seg convert_to_coco_json load_coco_json convert_to_coco_dict register_lvis_instances _get_lvis_instances_meta_v1 _get_lvis_instances_meta_v0_5 get_lvis_instances_meta load_lvis_json load_voc_instances register_pascal_voc merge_to_panoptic register_coco_panoptic_separated register_coco_instances RepeatFactorTrainingSampler InferenceSampler TrainingSampler GroupedBatchSampler apply_augmentations AugInput Augmentation _check_img_dtype StandardAugInput RandomFlip RandomSaturation RandomLighting RandomRotation RandomApply RandomContrast Resize RandomCrop ResizeShortestEdge RandomExtent RandomBrightness ResizeTransform HFlip_rotated_box RotationTransform ExtentTransform Resize_rotated_box default_setup DefaultTrainer DefaultPredictor default_argument_parser PeriodicCheckpointer LRScheduler IterationTimer PreciseBN CallbackHook PeriodicWriter AutogradProfiler EvalHook launch _distributed_worker _find_free_port SimpleTrainer HookBase TrainerBase CityscapesEvaluator CityscapesSemSegEvaluator CityscapesInstanceEvaluator instances_to_coco_json COCOEvaluator _evaluate_box_proposals _evaluate_predictions_on_coco 
DatasetEvaluator DatasetEvaluators inference_on_dataset inference_context COCOeval_opt _evaluate_predictions_on_lvis _evaluate_box_proposals LVISEvaluator COCOPanopticEvaluator _print_panoptic_results PascalVOCDetectionEvaluator parse_rec voc_eval voc_ap RotatedCOCOeval RotatedCOCOEvaluator SemSegEvaluator verify_results print_csv_format flatten_results_dict export_onnx_model add_export_config Caffe2Model export_caffe2_model Caffe2Tracer Caffe2KeypointRCNNInference Caffe2Boxes Caffe2RPN Caffe2Compatible Caffe2FastRCNNOutputsInference Caffe2MaskRCNNInference Caffe2ROIPooler InstancesList run_and_save_graph export_onnx_model _op_stats export_caffe2_detection_model _assign_device_option ProtobufDetectionModel ProtobufModel Caffe2GeneralizedRCNN Caffe2RetinaNet set_caffe2_compatible_tensor_mode _cast_to_f32 Caffe2PanopticFPN Caffe2MetaArch assemble_rcnn_outputs_by_name convert_batched_inputs_to_c2_format GenericMixin mock_fastrcnn_outputs_inference ROIHeadsPatcher patch Caffe2CompatibleConverter patch_generalized_rcnn mock_mask_rcnn_inference mock_keypoint_rcnn_inference rename_op_input get_sub_graph_external_input_output to_device IllegalGraphTransformError mock_torch_nn_functional_interpolate save_graph_base identify_reshape_sub_graph alias remove_reshape_for_fc construct_init_net_from_params DiGraph get_params_from_init_net get_pb_arg_valstrings _modify_blob_names _rename_blob group_norm_replace_aten_with_caffe2 _rename_versioned_blob_in_proto get_pb_arg_ints ScopedWS get_pb_arg_vals save_graph get_pb_arg_floats fuse_copy_between_cpu_and_gpu get_pb_arg_vali get_pb_arg_valf _generic_status_identifier onnx_compatibale_interpolate remove_dead_end_ops get_pb_arg BilinearInterpolation check_set_pb_arg infer_device_type fetch_any_blob get_producer_map fuse_alias_placeholder rename_op_output _get_dependency_chain _updater_raise _create_const_fill_op_from_numpy create_const_fill_op get_consumer_map _create_const_fill_op_from_c2_int8_tensor NaiveSyncBatchNorm 
FrozenBatchNorm2d AllReduce get_norm CNNBlockBase ModulatedDeformConv DeformConv _ModulatedDeformConv _DeformConv paste_mask_in_image_old pad_masks _do_paste_mask paste_masks_in_image scale_boxes nms_rotated batched_nms_rotated batched_nms ROIAlign _ROIAlign _ROIAlignRotated ROIAlignRotated pairwise_iou_rotated ShapeSpec _NewEmptyTensorOp nonzero_tuple Conv2d interpolate cat RotatedAnchorGenerator _create_grid_offsets build_anchor_generator _broadcast_params BufferList DefaultAnchorGenerator Box2BoxTransformRotated Box2BoxTransform Matcher convert_boxes_to_pooler_format assign_boxes_to_levels ROIPooler detector_postprocess sem_seg_postprocess subsample_labels DatasetMapperTTA GeneralizedRCNNWithTTA Backbone build_backbone LastLevelMaxPool build_retinanet_resnet_fpn_backbone build_resnet_fpn_backbone FPN LastLevelP6P7 _assert_strides_are_log2_contiguous ResNet BottleneckBlock build_resnet_backbone DeformBottleneckBlock make_stage BasicBlock BasicStem build_model combine_semantic_and_instance_outputs PanopticFPN ProposalNetwork GeneralizedRCNN permute_to_N_HWA_K RetinaNet RetinaNetHead SemanticSegmentor SemSegFPNHead build_sem_seg_head build_proposal_generator add_ground_truth_to_proposals find_top_rpn_proposals add_ground_truth_to_proposals_single_image RPN StandardRPNHead build_rpn_head RRPN find_top_rrpn_proposals build_box_head FastRCNNConvFCHead _ScaleGradient CascadeROIHeads FastRCNNOutputs fast_rcnn_inference fast_rcnn_inference_single_image FastRCNNOutputLayers build_keypoint_head keypoint_rcnn_inference KRCNNConvDeconvUpsampleHead keypoint_rcnn_loss BaseKeypointRCNNHead mask_rcnn_inference MaskRCNNConvUpsampleHead build_mask_head mask_rcnn_loss BaseMaskRCNNHead Res5ROIHeads build_roi_heads ROIHeads select_proposals_with_visible_keypoints select_foreground_proposals StandardROIHeads RROIHeads RotatedFastRCNNOutputLayers fast_rcnn_inference_single_image_rotated fast_rcnn_inference_rotated get get_checkpoint_url get_config_file _ModelZooUrls build_lr_scheduler 
_generate_optimizer_class_with_gradient_clipping build_optimizer _create_gradient_clipper maybe_add_gradient_clipping GradientClipType WarmupMultiStepLR WarmupCosineLR _get_warmup_factor_at_iter pairwise_iou matched_boxlist_iou BoxMode Boxes ImageList Instances Keypoints _keypoints_to_heatmap heatmaps_to_keypoints polygon_area BitMasks polygons_to_bitmask rasterize_polygons_within_box PolygonMasks pairwise_iou RotatedBoxes _flatten_to_tuple _wrapper_count_operators flop_count_operators activation_count_operators detect_compute_compatibility get_env_module collect_torch_env collect_env_info random_color colormap get_local_size synchronize get_world_size get_local_rank reduce_dict _get_global_gloo_group shared_random_seed all_gather get_rank gather _serialize_to_tensor is_main_process _pad_to_largest_tensor setup_environment setup_custom_environment _configure_libraries _import_file seed_all_rng EventStorage TensorboardXWriter EventWriter get_event_storage JSONWriter CommonMetricPrinter _cached_log_stream log_first_n _find_caller setup_logger log_every_n create_small_table _ColorfulFormatter log_every_n_seconds retry_if_cuda_oom _ignore_torch_cuda_oom PicklableWrapper _DetectedInstance VideoVisualizer VisImage Visualizer GenericMask ColorMode _PanopticPrediction _create_text_labels gen_header setup autodoc_skip_member paper_ref_role GithubURLDomain main setup Trainer CascadeBoundaryROIHeads add_boundary_preserving_config BoundaryPreservingHead boundary_preserving_mask_loss boundary_loss_func dice_loss_func BoundaryROIHeads ShowAction create_argument_parser register_action InferenceAction main DumpAction Action setup_dataset ShowAction create_argument_parser register_action EntrywiseAction main PrintAction Action main setup add_dataset_category_config add_densepose_config Params DensePoseDataMode DensePoseCocoEval DensePoseEvalMode IIDIsotropicGaussianUVLoss DensePoseUVConfidenceType DensePoseConfidenceModelConfig DensePoseLosses densepose_inference ASPPPooling 
build_densepose_predictor _NonLocalBlockND DataForMaskLoss ASPP DensePoseV1ConvXHead _linear_interpolation_utilities IndepAnisotropicGaussianUVLoss DensePoseDataFilter NONLocalBlock2D _resample_data DensePoseUVConfidenceConfig build_densepose_head DensePosePredictor _extract_single_tensors_from_matches _extract_single_tensors_from_matches_one_image build_densepose_losses DensePoseDeepLabHead initialize_module_params ASPPConv _grid_sampling_utilities DensePoseSegmConfidenceConfig build_densepose_data_filter _extract_at_points_packed _extract_data_for_mask_loss_from_matches DensePoseCOCOEvaluator _evaluate_predictions_on_coco_gpsm prediction_to_json _evaluate_predictions_on_coco_gps _evaluate_predictions_on_coco Decoder DensePoseROIHeads _maybe_create_keypoints_keep_instance_predicate build_transform _get_train_keep_instance_predicate _add_category_whitelists_to_metadata build_frame_selector _maybe_filter_and_map_categories _add_category_maps_to_metadata build_detection_train_loader build_combined_loader _compute_num_images_per_worker _add_category_id_to_contiguous_id_maps_to_metadata _maybe_create_mask_keep_instance_predicate combine_detection_dataset_dicts _maybe_create_specific_keep_instance_predicate _get_test_keep_instance_predicate _maybe_create_general_keep_instance_predicate build_detection_test_loader _map_category_id_to_contiguous_id _maybe_create_densepose_keep_instance_predicate _pooled_next CombinedDataLoader DatasetMapper build_augmentation _grouper InferenceBasedLoader DensePoseResult DensePoseList normalized_coords_transform DensePoseTransformData DensePoseDataRelative DensePoseOutput is_relative_local_path maybe_prepend_base_path register_dataset _maybe_add_segm _maybe_add_bbox _combine_images_with_annotations load_coco_json register_dataset _verify_annotations_have_unique_ids register_datasets create_video_frame_mapping _maybe_add_densepose _load_coco_annotations get_metadata _maybe_add_keypoints CocoDatasetInfo _add_categories_metadata DatasetType 
ImageResizeTransform LastKFramesSelector FirstKFramesSelector RandomKFramesSelector FrameSelectionStrategy video_list_from_file read_keyframes VideoKeyframeDataset list_keyframes Trainer DensePoseGeneralizedRCNNWithTTA rotate_box_inverse _inverse_rotation DensePoseDatasetMapperTTA AllEntrySelector EntrySelector FieldEntrySelector verbosity_to_level load_for_dataset load_from_cfg PointsVisualizer CompoundVisualizer RectangleVisualizer TextVisualizer MatrixVisualizer BoundingBoxVisualizer ScoredBoundingBoxVisualizer _extract_v_from_iuvarr DensePoseDataPointsVisualizer DensePoseDataPointsIVisualizer DensePoseDataPointsVVisualizer DensePoseResultsMplContourVisualizer DensePoseDataCoarseSegmentationVisualizer DensePoseResultsVVisualizer DensePoseResultsFineSegmentationVisualizer _densepose_data_i_for_cmap DensePoseOutputsFineSegmentationVisualizer _extract_u_from_iuvarr DensePoseOutputsVVisualizer DensePoseOutputsUVisualizer _densepose_data_v_for_cmap _densepose_data_u_for_cmap DensePoseDataPointsUVisualizer DensePoseResultsVisualizer DensePoseMaskedColormapResultsVisualizer DensePoseResultsCustomContourVisualizer _extract_i_from_iuvarr DensePoseResultsUVisualizer extract_boxes_xywh_from_instances BoundingBoxExtractor ScoreThresholdedExtractor CompoundExtractor ScoredBoundingBoxExtractor DensePoseResultExtractor extract_scores_from_instances NmsFilteredExtractor create_extractor get_quick_schedules_config_files _get_base_config_dir setup _collect_config_files get_config_files get_evolution_config_files _get_quick_schedules_config_dir get_model _get_model_config _get_evolution_config_dir _grouper TestCombinedDataLoader TestFrameSelector TestImageResizeTransform make_empty_instances DensePoseRCNNE2ETest ModelE2ETest make_model_inputs TestSetup TestStructures temp_video _create_video_frames TestVideoKeyframeDataset main setup Trainer CoarseMaskHead ColorAugSSDTransform add_pointrend_config SemSegDatasetMapper crop_transform get_uncertain_point_coords_on_grid point_sample 
get_point_coords_wrt_image point_sample_fine_grained_features generate_regular_grid_point_coords get_uncertain_point_coords_with_randomness StandardPointHead roi_mask_point_loss build_point_head PointRendROIHeads calculate_uncertainty PointRendSemSegHead calculate_uncertainty get_extensions main setup Trainer permute_all_cls_and_box_to_N_HWA_K_and_concat TensorMask TensorMaskAnchorGenerator _postprocess _paste_mask_lists_in_image _assignment_rule TensorMaskHead add_tensormask_config SwapAlign2Nat _SwapAlign2Nat SwapAlign2NatTest main setup Trainer add_tridentnet_config build_trident_resnet_backbone TridentBottleneckBlock make_trident_stage TridentConv merge_branch_instances TridentStandardROIHeads TridentRes5ROIHeads TridentRPN TestCheckpointer _LegacySubClassNotCfg TestConfigVersioning TestConfigurable _LegacySubClass _TestClassD _TestClassA _TestClassB _NewSubClassNewInit _TestClassC TestCaffe2Export FasterRCNNTest get_model_zoo RetinaNetTest TestModelZoo TestVisualizer make_dataset_dicts make_mask uncompressed_rle TestRLEToJson TestCOCOeval TestTransformAnnotations TestRotationTransform TestGroupedBatchSampler TestTransforms rasterize_polygons_with_grid_sample TestMaskCropPaste benchmark_paste iou_between_full_image_bit_masks TestNMSRotated nms_edit_distance ROIAlignTest benchmark_roi_align ROIAlignRotatedTest TestAnchorGenerator random_rotated_boxes TestBox2BoxTransformRotated random_boxes TestBox2BoxTransform FastRCNNTest TestMatcher create_model_input get_empty_instance RetinaNetE2ETest ModelE2ETest MaskRCNNE2ETest get_regular_bitmask_instances get_model_zoo ROIHeadsTest TestROIPooler RPNTest TestBoxes TestBoxIOU TestBoxMode TestImageList TestInstancesIndexing TestRotatedBoxesLayer TestRotatedBoxesStructure benchmark_rotated_iou do_flop setup do_parameter do_structure do_activation benchmark_eval benchmark_data setup benchmark_train setup get_evaluator do_train main do_test main setup Trainer parse_args output setup create_instances dataset_id_map setup_cfg 
join format strip readlines strftime getenv dirname abspath append get join format hipify glob copy dirname abspath append join isdir glob unlink realpath rmtree islink symlink dirname exists merge_from_file confidence_threshold config_file get_cfg merge_from_list opts freeze add_argument ArgumentParser deepcopy deepcopy sorted format convert_basic_c2_names zip getLogger tuple shape startswith info keys cat get_missing_parameters_message getLogger tuple warning max values sorted list view tolist shape convert_c2_detectron_names format info keys enumerate error clone get_unexpected_parameters_message len VERSION upgrade clone range VERSION downgrade clone range VERSION format warning getLogger _get _del _set split clear update startswith pop update list set any signature keys from_config_func pop isinstance getLogger format info len getLogger format info len pop format getLogger set info list tabulate arange format chain min log_first_n extend zip_longest colored zeros sum INFO len list check_metadata_consistency thing_classes print_instances_class_histogram filter_images_with_few_keypoints from_iterable zip filter_images_with_only_crowd_annotations DataLoader BatchSampler get_world_size RepeatFactorTrainingSampler format TrainingSampler getLogger TRAIN MapDataset SAMPLER_TRAIN REPEAT_THRESHOLD get_detection_dataset_dicts info DatasetMapper DatasetFromList repeat_factors_from_category_frequency len DataLoader MapDataset BatchSampler get_detection_dataset_dicts DatasetMapper DatasetFromList InferenceSampler len randint seed_all_rng T asarray convert dot expand_dims uint8 asarray T isinstance convert astype dot Tensor numpy get getexif pop apply_box convert astype Boxes Instances nonempty XYXY_ABS as_tensor clip minimum decode apply_segmentation isinstance convert XYXY_ABS transform_keypoint_annotations clip apply_coords reshape all decode ndarray Keypoints isinstance BitMasks Boxes Instances stack polygons_to_bitmask append tensor PolygonMasks Instances tensor clip 
RotatedBoxes append nonempty get update check_metadata_consistency keypoint_flip_map dict keypoint_names minimum asarray convert astype maximum randint int32 XYXY_ABS str format getLogger error enumerate RandomFlip MIN_SIZE_TRAIN_SAMPLING MIN_SIZE_TEST MIN_SIZE_TRAIN ResizeShortestEdge MAX_SIZE_TRAIN MAX_SIZE_TEST append get join list items register_coco_instances _get_builtin_metadata register_coco_panoptic_separated register_lvis_instances list join items get_lvis_instances_meta join list format items set _get_builtin_metadata register join register_pascal_voc update _get_coco_instances_meta ls join append info format partial map get_cityscapes_files info Pool len append get_local_path get_cityscapes_files replace list asarray isinstance Polygon endswith bounds geoms id buffer difference is_empty nonzero unique append XYXY_ABS union chain warning Timer getCatIds sorted list append frPyObjects get format XYWH_ABS zip loadImgs info seconds keys enumerate join get_local_path isinstance loadCats len list sorted format zip warn set info append len get decode hasattr XYWH_ABS isinstance convert len reverse_id_mapper item info append XYXY_ABS float sum PolygonMasks enumerate mkdirs dirname register set Timer load_imgs get_local_path format seconds sorted list zip get get_file_name set get_lvis_instances_meta info LVIS keys append len sorted sorted join get_local_path findall text append find register set register set register set update append copy StandardAugInput ndarray isinstance width arctan2 cos h pi w new_w sin new_h add_argument hash ArgumentParser str read format collect_env_info join config_file CUDNN_BENCHMARK setup_logger get_world_size get_rank mkdirs info OUTPUT_DIR seed_all_rng socket bind close AF_INET SOCK_STREAM spawn getLogger warning _find_free_port main_func list init_process_group new_group synchronize set_device main_func range decode XYWH_ABS has pred_keypoints convert tolist append XYXY_ABS numpy range len arange zeros_like max pairwise_iou 
Boxes append sum loadAnns range getAnnIds mean float proposal_boxes enumerate reshape sort min zeros as_tensor len pop deepcopy evaluate COCOeval summarize accumulate loadRes kpt_oks_sigmas array len str format evaluate getLogger min get_world_size perf_counter timedelta reset info DatasetEvaluators len eval train training get_ann_ids load_anns pop deepcopy format getLogger LVISResults warn get_results create_small_table print_results info LVISEval run append tabulate info int findall text append find arange concatenate size maximum sum max range parse_rec cumsum argmax max sum range eps format astype float minimum reshape maximum voc_ap argsort zeros bool array len join list format items getLogger info str getLogger error exit EXPECTED_RESULTS pformat info abs items list isinstance freeze defrost is_frozen CN property get_available_passes optimize apply get items sorted list infer_device_type _assign_op_device_option get_ssa remove_reshape_for_fc construct_init_net_from_params export_onnx_model onnx_graph_to_caffe2_net group_norm_replace_aten_with_caffe2 format fuse_copy_between_cpu_and_gpu encode_additional_info _op_stats colored info remove_dead_end_ops __name__ deepcopy fuse_alias_placeholder any get_params_from_init_net tabulate _assign_device_option format save_graph info get arange keypoint_rcnn_inference pred_classes Boxes RotatedBoxes int64 zeros to apply get from_tensors image_sizes zip append Tensor named_children isinstance ccc RPN patch ROIPooler device int upsample_filt zeros conv_transpose2d warning isinstance is_in_onnx_export FetchBlob arg get_pb_arg get_pb_arg get_pb_arg get_pb_arg get_pb_arg get_pb_arg format extend MakeArgument warning getattr setattr get_pb_arg update data type NetDef list format items isinstance extend warning append type len range enumerate defaultdict append range enumerate len get_ssa get_producer_map deepcopy get_ssa _update_i op reversed zip union deepcopy list _replace_list map output input append partial GetPydotGraph 
format print write_svg op GetPydotGraphMinimal dirname _modify_blob_names write_png write_pdf makedirs remove format check_set_pb_arg get_pb_arg_vals op get_pb_arg_vali info get_pb_arg decode rename_op_input rename_op_output op extend get_pb_arg_vali append bool enumerate external_output external_input output op zip input range len deepcopy get_ssa get_producer_map rename_op_output _rename_versioned_blob_in_proto get_ssa _rename_versioned_blob_in_proto get_ssa sum format get_all_paths get_producer_map min from_ssa warning get_consumer_map get_ssa op _get_dependency_chain append enumerate join get_ssa format all deepcopy remove get_sub_graph_external_input_output rename_op_output extend info append identify_reshape_sub_graph _fuse_once get_ssa list all extend reversed add set get_consumer_map enumerate isinstance arange grid_sample size expand stack device to split int arange chunk _do_paste_mask device ceil tensor to zeros len fromarray max uint8 min from_numpy resize zeros to numpy array float new_zeros zeros_like nms view size tolist new_zeros nms_rotated min clone to max _output_size tuple meshgrid reshape arange NAME clamp sqrt log2 floor epsilon cat cat pred_boxes isinstance has Instances scale Tensor float proposal_boxes image_size clip expand int min numel NAME ShapeSpec enumerate IN_FEATURES OUT_CHANNELS FPN build_resnet_backbone FPN channels build_resnet_backbone IN_FEATURES OUT_CHANNELS STEM_OUT_CHANNELS make_stage WIDTH_PER_GROUP max DEFORM_NUM_GROUPS FREEZE_AT RES2_OUT_CHANNELS append range OUT_FEATURES DEPTH RES5_DILATION DEFORM_ON_PER_STAGE BasicStem enumerate NORM STRIDE_IN_1X1 DEFORM_MODULATED NUM_GROUPS DEVICE to META_ARCHITECTURE device zeros_like tolist argsort item append to shape reshape permute view NAME NAME arange Instances device tensor clip count all Boxes nonempty append cat isfinite zip batched_nms enumerate sort min full len ones Instances device image_size log cat len HEAD_NAME arange RotatedBoxes Instances device tensor clip count all 
nonempty batched_nms_rotated append cat isfinite zip enumerate sort min full len NAME all view reshape Boxes Instances nonzero batched_nms clip NAME put_scalar cross_entropy get_event_storage view gt_keypoints squeeze numel shape cat append tensor to to_heatmap len detach heatmaps_to_keypoints zip cat split put_scalar arange get_event_storage put_image size numel sigmoid stack binary_cross_entropy_with_logits item append to max cat enumerate arange sigmoid zip cat split NAME NAME append squeeze gt_classes put_scalar get_event_storage numel mean unsqueeze any append tensor all view reshape RotatedBoxes Instances batched_nms_rotated nonzero clip replace join resource_filename merge_from_file load build_model get_cfg WEIGHTS get_checkpoint_url get_config_file clone type __name__ _create_gradient_clipper type CLIP_GRADIENTS _generate_optimizer_class_with_gradient_clipping WEIGHT_DECAY_NORM isinstance WEIGHT_DECAY_BIAS SGD add named_parameters BASE_LR modules maybe_add_gradient_clipping BIAS_LR_FACTOR WEIGHT_DECAY LR_SCHEDULER_NAME min area where clamp_ prod zeros max max min area clamp long argmax arange view exp_ clamp squeeze new_zeros ceil float sum max range frPyObjects merge from_numpy deepcopy max polygons_to_bitmask items list isinstance log_first_n extend append WARN tensor get_fields Tensor update pop isinstance training train join sorted format check_output strip set isfile append split get str list defaultdict items detect_compute_compatibility get_env_module device_count __version__ is_available tabulate range append origin randint len barrier get_world_size format getLogger from_buffer dumps get_rank warning get_backend device to len get_world_size all_gather tensor max zeros cat _serialize_to_tensor _get_global_gloo_group loads zip append max _pad_to_largest_tensor max zip _get_global_gloo_group loads get_rank append _serialize_to_tensor _pad_to_largest_tensor randint all_gather get_world_size seed int set_rng_state format from_bytes get_state getLogger 
strftime getpid info urandom spec_from_file_location exec_module module_from_spec get int setUseOpenCL get setup_custom_environment _configure_libraries endswith setup_environment _import_file import_module setFormatter join format _cached_log_stream getLogger addHandler StreamHandler Formatter mkdirs _ColorfulFormatter dirname colored DEBUG setLevel f_back _getframe f_code _find_caller log isinstance _find_caller log get time _find_caller log tuple tabulate zip print format getattr unescape lower warning reference split_explicit_title add_role connect add_transform add_domain add_config_value merge_from_file add_boundary_preserving_config config_file get_cfg merge_from_list default_setup opts freeze update test_with_TTA verify_results setup resume_or_load build_model test WEIGHTS ENABLED Trainer register_hooks eval_only is_main_process CN size sum view dice_loss_func clamp sigmoid requires_grad_ conv2d unsqueeze interpolate binary_cross_entropy_with_logits put_scalar arange get_event_storage put_image size numel sigmoid stack boundary_loss_func binary_cross_entropy_with_logits item append to max cat enumerate items list add_parser ArgumentParser set_defaults add_subparsers create_argument_parser setup_logger func setLevel parse_args verbosity_to_level get format timer info add_densepose_config setup_logger add_dataset_category_config set_strict_kwargs_checking CN CN kaiming_normal_ constant_ named_parameters NAME DensePosePredictor DensePoseDataFilter DensePoseOutput locals len clamp float min unbind _linear_interpolation_utilities unbind arange grid_sample size expand stack u gt_densepose hasattr y view list clone i unsqueeze full_like zip append v tensor range x len _extract_single_tensors_from_matches_one_image size len extend long cat enumerate size append to sum DataForMaskLoss cat DensePoseLosses append tolist range len getLogger _evaluate_predictions_on_coco_gpsm warn _evaluate_predictions_on_coco_gps create_small_table info accumulate summarize 
DensePoseCocoEval evaluate accumulate summarize DensePoseCocoEval evaluate get_world_size IMS_PER_BATCH get get items list sorted keys enumerate MIN_KEYPOINTS_PER_IMAGE COARSE_SEGM_TRAINED_BY_MASKS _maybe_create_specific_keep_instance_predicate _maybe_create_general_keep_instance_predicate _maybe_create_general_keep_instance_predicate get append get items list format whitelisted_categories getLogger info get category_map list format items getLogger info get list thing_classes _add_category_id_to_contiguous_id_maps_to_metadata print_instances_class_histogram from_iterable _map_category_id_to_contiguous_id load_proposals_into_dataset zip append _maybe_filter_and_map_categories len _add_category_maps_to_metadata combine_detection_dataset_dicts _add_category_whitelists_to_metadata _add_category_maps_to_metadata combine_detection_dataset_dicts _add_category_whitelists_to_metadata FirstKFramesSelector STRATEGY RandomKFramesSelector FrameSelectionStrategy NUM_IMAGES LastKFramesSelector _compute_num_images_per_worker next extend str getLogger RandomRotation ROTATION_ANGLES info append next range iter device fsdecode is_relative_local_path register maybe_prepend_base_path set format info getLogger Timer seconds get format info getLogger XYWH_ABS enumerate get join _maybe_add_segm _maybe_add_bbox create_video_frame_mapping _maybe_add_densepose _maybe_add_keypoints zip append get update defaultdict set _combine_images_with_annotations getLogger _verify_annotations_have_unique_ids _load_coco_annotations _add_categories_metadata images_root annotations_fpath name register_dataset empty grid_sample tuple affine_grid astype maximum clone pad repeat interpolate float numpy range len abs_sin apply_box abs_cos densepose_transform_src get_local_path numpy clip numpy clip N_PART_LABELS numpy clip has clone has error isinstance getLogger join _get_base_config_dir relpath startswith splitext append listdir merge_from_file join _get_base_config_dir add_densepose_config get_cfg 
add_dataset_category_config _get_model_config _get_model_config BitMasks rand Boxes Instances to exp byte linspace meshgrid float range append unlink name _create_video_frames add_pointrend_config CN CropTransform apply_segmentation get_transform apply_image shape unique get_crop_size randint range squeeze unsqueeze grid_sample Size tensor affine_grid int arange view rand point_sample uncertainty_func cat float min shape zeros to transpose get_point_coords_wrt_image append tensor enumerate cat split put_scalar arange size numel binary_cross_entropy_with_logits to cat NAME unsqueeze clone add_tensormask_config view new_full int all zeros_like size min max cat tolist empty_like paste_masks_in_image unique append tensor cat clip pred_boxes Instances nonempty _paste_mask_lists_in_image image_size CN add_tridentnet_config CN append block_class range convert_frozen_batchnorm STEM_OUT_CHANNELS BRANCH_DILATIONS WIDTH_PER_GROUP max TRIDENT_STAGE DEFORM_NUM_GROUPS FREEZE_AT RES2_OUT_CHANNELS append freeze range OUT_FEATURES DEPTH RES5_DILATION DEFORM_ON_PER_STAGE BasicStem enumerate pop NORM STRIDE_IN_1X1 TEST_BRANCH_IDX parameters DEFORM_MODULATED NUM_BRANCH NUM_GROUPS scores pred_classes append tensor batched_nms range cat len merge_from_file get_cfg get_config_file zeros norm array range append tolist asarray uncompressed_rle min nonzero encode max sum arange grid_sample from_numpy shape meshgrid to append randn clamp rand Boxes benchmark manual_seed is_available cat range equal benchmark BitMasks rand Boxes Instances to BitMasks rand Boxes Instances to is_available stack benchmark append load str format info build_model build_detection_test_loader Counter WEIGHTS mean eval num_inputs zip append trange sum std flop_count_operators values load str format info build_model build_detection_test_loader Counter WEIGHTS activation_count_operators mean eval num_inputs zip append trange sum std values parameter_count_table info build_model str info build_model Timer format setup 
IMS_PER_BATCH build_detection_train_loader total reset available iter info seconds trange next range virtual_memory load list format setup build_model build_optimizer islice f WEIGHTS build_detection_train_loader DetectionCheckpointer DistributedDataParallel defrost register_hooks info SimpleTrainer train load Timer list format setup build_model model islice build_detection_test_loader WEIGHTS eval defrost info seconds range evaluator_type join COCOPanopticEvaluator COCOEvaluator SemSegEvaluator append OUTPUT_DIR join format print_csv_format get_evaluator inference_on_dataset build_detection_test_loader OrderedDict info OUTPUT_DIR TEST is_main_process get PeriodicCheckpointer build_lr_scheduler CHECKPOINT_PERIOD format MAX_ITER build_optimizer build_detection_train_loader DetectionCheckpointer info train OUTPUT_DIR format DistributedDataParallel do_train info add_argument ArgumentParser show join format print waitKey imshow save asarray XYWH_ABS reshape convert Boxes Instances XYXY_ABS add_export_config | # BMaskR-CNN This code is developed on [*Detectron2*](https://github.com/facebookresearch/detectron2) > [**Boundary-preserving Mask R-CNN**](https://arxiv.org/abs/2007.08921) > *ECCV 2020* > Tianheng Cheng, Xinggang Wang, Lichao Huang, Wenyu Liu <div align="center"> <img src="./projects/BMaskR-CNN/figures/demo.gif" width="100%" /> </div> *Video from [Cam看世界](https://youtube.com/channel/UCBaNgnYE2i5jqgr7EFfP9zQ)* on Youtube. ## Abstract | 2,344 |
hutao568/east | ['optical character recognition', 'scene text detection', 'curved text detection'] | ['EAST: An Efficient and Accurate Scene Text Detector'] | lanms/__init__.py models/east.py utils/utils.py data/augment.py lanms/__main__.py data/geo_map_cython_lib/setup.py loss.py train.py lanms/.ycm_extra_conf.py pyicdartools/evaluation.py inference.py models/__init__.py utils/__init__.py models/east_resnet18.py utils/restore.py pyicdartools/rrc_evaluation_funcs.py models/east_onnx.py models/to_onnx.py config.py data/icdar.py get_images bf_resize_image draw sort_poly preprocess detect resize_image predict LossFunc ohem_batch ohem_single dice_coefficient set_learning_rate val parse print_args update_args main train val_f1 RandomRotate Compose HeightJitter Noise ColorJitter get_images load_annoataion line_verticle shrink_poly polygon_area crop_area point_dist_to_line fit_line line_cross_point generate_rbox check_and_validate_polys collate_fn ICDAR sort_rectangle rectangle_from_parallelogram GetCompilationInfoForFile IsHeaderFile MakeRelativePathsInFlagsAbsolute FlagsForFile DirectoryOfThisScript merge_quadrangle_n9 ResNet resnet50 East Bottleneck conv3x3 mean_image_subtraction resnet18 BasicBlock ResNet resnet50 East Bottleneck conv3x3 mean_image_subtraction resnet18 BasicBlock ResNet resnet50 Bottleneck conv3x3 mean_image_subtraction East_Resnet18 resnet18 BasicBlock parse evaluate_method default_evaluation_params evaluation_imports eval validate_data validate_point_inside_bounds load_zip_file_keys validate_clockwise_points validate_lines_in_file load_res_json load_gt_json decode_utf8 print_help main_validation get_tl_line_values get_tl_line_values_from_file_contents validate_tl_line main_evaluation restore_rectangle_rbox restore_rectangle init_weights get_images save_checkpoint AverageMeter glob join format extend int shape resize float max shape float resize bf_resize_image isinstance tuple transpose astype float32 from_numpy unsqueeze resize_image cuda time 
format zeros_like print reshape fillPoly astype argwhere int32 zeros restore_rectangle merge_quadrangle_n9 enumerate sum argmin format print close eval open imwrite polylines astype copy int32 int sort reshape min item float sum append float range ohem_single sum param_groups warm_up cos pi pow lr epochs model zero_grad tensorboardX localtime cuda len strftime update set_learning_rate format size item enumerate time criterion backward print AverageMeter step add_scalar print eval AverageMeter time get_images val_dir eval val_annotation predict checkpoint_dir tensorboardX DataLoader save copy2 cuda Adam epochs val_dir device_count load_state_dict range train_dir format init_weights start_epoch net checkpoint load join gpus print add_scalar train_annotation parameters val_annotation train val_f1 ICDAR makedirs print format dir add_argument ArgumentParser dir append append clip polygon_area zip arctan2 min astype choice shape int32 zeros range max clip polyfit print norm arccos line_verticle fit_line dot line_cross_point sum arctan print argmin argmax norm ones fillPoly min argmin fit_line sort_rectangle line_cross_point gen_geo_map argwhere zip append zeros sum array range rectangle_from_parallelogram enumerate list permute zip append range len append join startswith IsHeaderFile compiler_flags_ exists compiler_flags_ GetCompilationInfoForFile compiler_working_dir_ MakeRelativePathsInFlagsAbsolute DirectoryOfThisScript nms_impl array copy load_url ResNet load_state_dict load_url ResNet load_state_dict range load load_res_json validate_lines_in_file load_gt_json compute_ap area list decode_utf8 append polygon_from_points range import_module get_intersection_over_union float empty get_tl_line_values_from_file_contents items namedtuple load_gt_json load_res_json rectangle_to_polygon int8 Rectangle get_intersection zeros len write exit group match namelist append ZipFile dict dict decode BOM_UTF8 replace startswith encode validate_tl_line decode_utf8 replace split 
get_tl_line_values validate_point_inside_bounds int replace validate_clockwise_points group match float replace argsort append get_tl_line_values split update default_evaluation_params_fn validate_data_fn writestr items print write dumps close dict print_help evaluate_method_fn ZipFile makedirs update default_evaluation_params_fn validate_data_fn print exit dict concatenate reshape transpose zeros array str join copy save makedirs print format apply | hutao568/east | 2,345 |
huylenguyen806/vnasr | ['speech enhancement'] | ['SEGAN: Speech Enhancement Generative Adversarial Network'] | scripts/create_tfrecords.py tensorflow_asr/utils/file_util.py tensorflow_asr/utils/feature_util.py tensorflow_asr/models/transducer/contextnet.py tensorflow_asr/losses/rnnt_loss.py tensorflow_asr/augmentations/methods/base_method.py examples/deepspeech2/train.py examples/deepspeech2/tflite.py tensorflow_asr/models/transducer/rnn_transducer.py examples/rnn_transducer/train.py tests/featurizer/test_sentencepiece.py scripts/generate_vocab_subwords.py scripts/generate_metadata.py examples/contextnet/tflite.py tensorflow_asr/models/layers/row_conv_1d.py tensorflow_asr/configs/config.py examples/deepspeech2/test.py tensorflow_asr/models/encoders/contextnet.py examples/conformer/inference/run_tflite_model.py examples/rnn_transducer/tflite.py tensorflow_asr/models/layers/point_wise_ffn.py examples/demonstration/rnn_transducer.py tensorflow_asr/models/encoders/conformer.py tensorflow_asr/models/layers/bnlstmcell.py tests/rnn_transducer/test_rnn_transducer.py examples/conformer/inference/gen_tflite_model.py examples/contextnet/test.py tensorflow_asr/featurizers/methods/gammatone.py tests/deepspeech2/test_ds2.py setup.py tensorflow_asr/featurizers/speech_featurizers.py tensorflow_asr/utils/env_util.py examples/conformer/test.py tensorflow_asr/utils/data_util.py tensorflow_asr/losses/ctc_loss.py examples/conformer/inference/run_saved_model.py tensorflow_asr/datasets/asr_dataset.py tensorflow_asr/models/ctc/deepspeech2.py tensorflow_asr/metrics/error_rates.py scripts/create_vocab_from_trans.py tensorflow_asr/utils/layer_util.py tensorflow_asr/models/ctc/jasper.py tensorflow_asr/models/transducer/base_transducer.py tensorflow_asr/utils/metric_util.py tensorflow_asr/augmentations/augmentation.py tensorflow_asr/models/layers/subsampling.py tensorflow_asr/datasets/base_dataset.py examples/jasper/test.py tensorflow_asr/optimizers/accumulation.py 
tests/losses/test_rnnt_loss.py tensorflow_asr/models/layers/multihead_attention.py examples/contextnet/train.py tensorflow_asr/augmentations/methods/specaugment.py tensorflow_asr/utils/shape_util.py scripts/generate_vocab_sentencepiece.py tensorflow_asr/models/layers/sequence_wise_bn.py tensorflow_asr/utils/app_util.py examples/jasper/tflite.py tensorflow_asr/optimizers/schedules.py tensorflow_asr/models/layers/embedding.py tests/jasper/test_jasper.py examples/demonstration/tflite_conformer.py examples/jasper/train.py scripts/saved_model_to_weights.py tensorflow_asr/utils/math_util.py scripts/create_mls_trans.py tensorflow_asr/models/layers/positional_encoding.py tests/conformer/test_conformer.py tensorflow_asr/models/activations/glu.py examples/conformer/inference/gen_saved_model.py examples/demonstration/conformer.py scripts/create_librispeech_trans.py examples/rnn_transducer/test.py tensorflow_asr/models/ctc/base_ctc.py tensorflow_asr/models/base_model.py tensorflow_asr/models/transducer/conformer.py examples/conformer/train.py tensorflow_asr/featurizers/text_featurizers.py examples/demonstration/streaming_tflite_conformer.py tests/contextnet/test_contextnet.py parse_requirements ConformerModule recognizer int_or_str send prepare_split make_alphabet_file Augmentation AugmentationMethod TimeMasking FreqMasking Config DecoderConfig LearningConfig RunningConfig DatasetConfig ASRTFRecordDataset ASRDataset ASRSliceDataset BaseDataset preemphasis normalize_signal tf_normalize_signal tf_read_raw_audio normalize_audio_feature TFSpeechFeaturizer tf_normalize_audio_features tf_preemphasis depreemphasis tf_depreemphasis SpeechFeaturizer slice_signal merge_slices tf_merge_slices NumpySpeechFeaturizer load_and_convert_to_wav read_raw_audio SentencePieceFeaturizer SubwordFeaturizer TextFeaturizer CharFeaturizer fft_weights erb_point make_erb_filters erb_space ctc_loss CtcLoss rnnt_loss_warprnnt transition_probs RnntLoss reduce_logsumexp nan_to_zero 
compute_rnnt_loss_and_grad_helper backward_dp extract_diagonals forward_dp rnnt_loss_tf rnnt_loss ErrorRate BaseModel GLU CtcModel RnnBlock DeepSpeech2 Reshape RnnModule ConvModule FcModule DeepSpeech2Encoder FcBlock ConvBlock Jasper JasperBlock Reshape JasperEncoder JasperResidual JasperSubBlockResidual JasperSubBlock ConvModule ConformerBlock ConformerEncoder MHSAModule FFModule Reshape ConvModule get_activation ContextNetEncoder ConvBlock SEModule ds2_rnn_batch_norm BNLSTMCell Embedding MultiHeadAttention RelPositionMultiHeadAttention PointWiseFFN PositionalEncodingConcat PositionalEncoding RowConv1D SequenceBatchNorm TimeReduction Conv2dSubsampling VggSubsampling Transducer TransducerPrediction TransducerJoint TransducerJointReshape Conformer ContextNet RnnTransducerBlock RnnTransducerEncoder RnnTransducer Reshape GradientAccumulation CyclicTransformerSchedule SANSchedule BoundExponentialDecay TransformerSchedule evaluate_results create_inputs create_labels create_logits setup_environment setup_strategy setup_tpu has_devices setup_devices float_feature bytestring_feature int64_feature load_yaml is_cloud_path is_hdf5_filepath save_file read_file preprocess_paths get_conv get_rnn pad_prediction_tfarray nan_to_zero count_non_blank merge_repeated find_max_length_prediction_tfarray get_reduced_length log10 get_num_batches merge_two_last_dims bytes_to_string execute_cer tf_cer wer cer execute_wer get_float_spec get_shape_invariants shape_list test_conformer test_contextnet test_ds2 test_featurizer test_encoder test_iextract test_jasper WarpRNNTTest test_streaming_transducer pop defaultdict strip startswith append get blank get_input_details ones print resize_tensor_input get_output_details Interpreter recognize allocate_tensors zeros join print load expanduser load read BytesIO ndarray isinstance ValueError resample mean asfortranarray expanduser resample decode_wav int list pad zip append range mean sqrt var sqrt reduce_mean reduce_variance abs max abs reduce_max 
expand_dims zeros range make_erb_filters float32 complex64 sqrt pad cast abs conj exp log ones_like exp cos complex64 sqrt stack cast sin abs log_softmax warp_rnnt_loss reduce_max nan_to_zero pad reverse matrix_diag_part_v2 multiply reduce_sum concat transpose scan reverse matrix_diag_part_v2 nan_to_zero one_hot sequence_mask transpose scan reverse expand_dims matrix_diag_part_v2 zeros_like concat where exp sequence_mask ones reduce_sum extract_diagonals gather_nd cast forward_dp expand_dims range transition_probs one_hot log_softmax backward_dp stack tile equal is_nan reshape scatter_nd lower batch_normalization concat moments split convert_to_tensor items list update_state tqdm info split simplefilter setLevel INFO list_physical_devices set_visible_devices info experimental_connect_to_cluster initialize_tpu_system TPUClusterResolver info setup_devices isinstance add_implicit_resolver list SafeLoader X compile isinstance makedirs splitext splitext constant log shape_list while_loop reshape constant list len set dict zip range bytes_to_string split bytes_to_string zip row_lengths bytes_split edit_distance cast to_sparse as_list shape shape_list get_shape_invariants Config blank Interpreter invoke set_tensor speech_config add_featurizers resize_tensor_input shape CharFeaturizer normal get_tensor Conformer TFSpeechFeaturizer info allocate_tensors make constant get_concrete_function get_input_details convert get_output_details summary zeros from_concrete_functions decoder_config Config blank Interpreter invoke set_tensor speech_config add_featurizers resize_tensor_input shape CharFeaturizer normal get_tensor TFSpeechFeaturizer info allocate_tensors make constant ContextNet get_concrete_function get_input_details convert get_output_details summary zeros from_concrete_functions decoder_config Config make DeepSpeech2 info get_concrete_function convert TFSpeechFeaturizer shape speech_config add_featurizers summary from_concrete_functions decoder_config CharFeaturizer load 
join encode_as_pieces decode_ids decode_pieces SentencePieceProcessor dirname abspath encode_as_ids pardir join TFSpeechFeaturizer get_data dirname abspath load_from_file pardir join ASRSliceTestDataset create decode print search TFSpeechFeaturizer iextract lower sub dirname abspath iter next load_from_file Config make info get_concrete_function convert TFSpeechFeaturizer shape from_concrete_functions speech_config add_featurizers summary Jasper decoder_config CharFeaturizer Config blank Interpreter invoke set_tensor speech_config RnnTransducer add_featurizers resize_tensor_input shape CharFeaturizer normal get_tensor TFSpeechFeaturizer info allocate_tensors make constant get_concrete_function get_input_details convert get_output_details summary zeros from_concrete_functions decoder_config | <h1 align="center"> <p>TensorFlowASR :zap:</p> <p align="center"> <a href="https://github.com/TensorSpeech/TensorFlowASR/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/TensorSpeech/TensorFlowASR?logo=apache&logoColor=green"> </a> <img alt="python" src="https://img.shields.io/badge/python-%3E%3D3.6-blue?logo=python"> <img alt="tensorflow" src="https://img.shields.io/badge/tensorflow-%3E%3D2.5.1-orange?logo=tensorflow"> <a href="https://pypi.org/project/TensorFlowASR/"> <img alt="PyPI" src="https://img.shields.io/pypi/v/TensorFlowASR?color=%234285F4&label=release&logo=pypi&logoColor=%234285F4"> | 2,346 |
hvcl/scribble2label | ['cell segmentation'] | ['Scribble2Label: Scribble-Supervised Cell Segmentation via Self-Generating Pseudo-Labels with Consistency'] | scripts/optimizer.py scripts/metric.py scripts/tb_utils.py Train.py scripts/dataset.py Inference.py preprocess.py scripts/utils.py scripts/metric_mdice.py Learner.py config inference_image inference Learner get_domimant_colors scribblize read_image_labels get_images_details remove_corner image_ids_in read_image config dsbDataset get_transforms dsbTestDataset AverageMeter Evaluator Evaluator RAdam init_tb_logger seed_everything init_logger device join zip astype tqdm inference_image add_pred reset save device to numpy add_batch append listdir print imread format rgb2hsv format uint16 concatenate shape zeros range read_image arange reshape KMeans fit astype labels_ histogram unique len get_domimant_colors read_image_labels reshape shape append corner_peaks int list astype remove_corner skeletonize sample label corner_harris abs max range Compose SummaryWriter Path seed str manual_seed stdout setFormatter getLogger addHandler StreamHandler Formatter mkdir Path setLevel INFO FileHandler | # Scribble2Label: Scribble-Supervised Cell Segmentation via Self-Generating Pseudo-Labels with Consistency This is official PyTorch implementation of Scribble2Label (MICCAI 2020). For technical details, please refer to: ___ **Scribble2Label: Scribble-Supervised Cell Segmentation via Self-Generating Pseudo-Labels with Consistency** [Hyeonsoo Lee](https://scholar.google.com/citations?user=BV-AwjoAAAAJ&hl=ko&authuser=2), [Won-ki Jeong](https://scholar.google.co.kr/citations?user=bnyKqkwAAAAJ&hl=ko) **[[Paper](https://arxiv.org/abs/2006.12890)]** **MICCAI 2020**  - Segmentation is a fundamental process in microscopic cell image analysis. With the advent of recent advances in deep learning, more accurate and high-throughput cell segmentation has become feasible. | 2,347 |
hwkim94/ybigta_project | ['style transfer'] | ['Artistic style transfer for videos'] | SparklingEmoticana/sparkling_emoticana/CNN_spark_submit.py video style transfer/style_transfer.py CNN cutting load_wav_data onehot_encoding making_pair get_noise_image style_layer_loss pool_layer write_image_output get_style_images get_prev_frame maybe_make_directory get_content_weights get_bias write_video_output read_image get_optimizer write_image sum_masked_style_losses sum_shortterm_temporal_losses warp_image parse_args get_weights sum_longterm_temporal_losses conv_layer normalize convert_to_original_colors render_single_image postprocess relu_layer stylize build_model preprocess sum_style_losses check_image get_longterm_weights render_video main temporal_loss read_weights_file minimize_with_lbfgs get_content_frame read_flow_file content_layer_loss get_init_image get_prev_warped_frame gram_matrix sum_content_losses get_content_image minimize_with_adam get_mask_image mask_style_layer load int shuffle melspectrogram listdir append append int zeros normalize video_output_dir style_layer_weights add_argument style_imgs_weights maybe_make_directory img_output_dir ArgumentParser content_layer_weights video model_weights relu_layer print Variable pool_layer shape zeros conv_layer loadmat print conv2d format get_shape print get_shape format relu print get_shape avg_pool format constant reshape size constant pow get_shape value reduce_sum get_shape value reduce_sum gram_matrix pow reshape transpose matmul convert_to_tensor get_shape value multiply stack get_mask_image append expand_dims range convert_to_tensor style_layers zip style_layer_weights style_imgs_weights assign mask_style_layer style_mask_imgs run convert_to_tensor style_layers style_layer_weights style_imgs_weights assign zip run convert_to_tensor content_layers assign zip content_layer_weights run size float32 reduce_sum cast float l2_loss maximum get_content_weights prev_frame_indices range get_prev_warped_frame assign 
prev_frame_indices get_longterm_weights range run get_prev_warped_frame assign get_content_weights temporal_loss run astype float32 IMREAD_COLOR preprocess check_image imread postprocess imwrite copy astype copy list readlines len map float32 dstack zeros array range split sum makedirs minimize print assign global_variables_initializer run format minimize print assign eval global_variables_initializer run AdamOptimizer print_iterations learning_rate ScipyOptimizerInterface join format write_image video_output_dir zfill style_layers style_imgs_weights maybe_make_directory open str write_image img_name init_img_type style_weight content_layers format max_iterations close zip max_size tv_weight optimizer join style_imgs content_img write style_mask_imgs content_weight img_output_dir get_prev_frame get_prev_warped_frame get_noise_image noise_ratio join format video_input_dir zfill read_image join content_img_dir float astype float32 IMREAD_COLOR shape preprocess check_image resize max_size imread join style_imgs astype float32 IMREAD_COLOR shape preprocess check_image resize append imread style_imgs_dir seed astype float32 join content_img_dir astype float32 IMREAD_GRAYSCALE check_image resize imread amax join format video_output_dir zfill IMREAD_COLOR check_image imread str join format read_flow_file video_input_dir astype float32 get_prev_frame preprocess str join format video_input_dir read_weights_file remap shape zeros float range postprocess COLOR_LUV2BGR COLOR_YCR_CB2BGR COLOR_BGR2YUV astype float32 COLOR_BGR2LAB COLOR_BGR2YCR_CB merge preprocess COLOR_BGR2LUV COLOR_LAB2BGR COLOR_YUV2BGR cvtColor split get_content_image content_img get_style_images end_frame range start_frame parse_args render_single_image render_video video | # ybigta_project ## kaggle - 2017 summer vacation new-member data analysis project - Topic: house price prediction (regression) - Models: ensemble using RandomForest, Linear Regression, Adaboost, etc. - Data: https://www.kaggle.com/c/house-prices-advanced-regression-techniques ## travel_ET - 
2017 fall semester conference - Topic - travel destination recommendation | 2,348 |
hworang77/PAN | ['hard attention'] | ['Progressive Attention Networks for Visual Attribute Prediction'] | train.py libs/spatialatt.py libs/batch_norm.py libs/theano_util.py libs/spatialelem.py gengerate_mdist.py libs/confusionmatrix.py libs/mnist_util.py pack.py generateSimpleColorQuestionInfo renderImage BatchNormLayer batch_norm ConfusionMatrix read SpatialAttentionLayer SpatialElemwiseMergeLayer satt satt2 batch_imresize unzipp shuffle set multinomial append argmax range len fromarray normal int uint8 imresize convert astype uniform any paste save append zeros float round array range len getattr identity hasattr join range len OrderedDict get_value int imresize transpose shape zeros range FlattenLayer DropoutLayer BatchNormLayer add SpatialElemwiseMergeLayer conv NonlinearityLayer EmbeddingLayer ReshapeLayer FlattenLayer DropoutLayer BatchNormLayer add SpatialElemwiseMergeLayer conv NonlinearityLayer EmbeddingLayer SpatialAttentionLayer ReshapeLayer | # Progressive Attention Networks for Visual Attribute Prediction This implements a network introduced in the arXiv paper: [Hierarchical Attention Networks](https://arxiv.org/abs/1606.02393) You first need to place MNIST dataset in ./data/mnist/ - generate_mdist.py generates the MDIST synthetic dataset. - pack.py packs the generated images and labels into a .npz file. - train.py trains a progressive attention network with local context of size 3x3 and tests it at every epoch. If you're using this code in a publication, please cite our paper. @article{seo2016hierarchical, title={Hierarchical Attention Networks}, author={Seo, Paul Hongsuck and Lin, Zhe and Cohen, Scott and Shen, Xiaohui and Han, Bohyung}, | 2,349 |
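The PAN row's README above says `pack.py` packs the generated MDIST images and labels into a `.npz` file; a minimal sketch of that pack/load round trip with NumPy (the file name and array shapes are made-up stand-ins, not the repo's actual data):

```python
import os
import tempfile

import numpy as np

# Stand-in arrays for the generated MDIST images and labels.
images = np.zeros((10, 100, 100), dtype=np.uint8)
labels = np.arange(10)

# Pack both arrays into one .npz archive, then read them back by key.
path = os.path.join(tempfile.gettempdir(), "mdist_pack.npz")
np.savez(path, images=images, labels=labels)

data = np.load(path)
print(data["images"].shape, data["labels"].shape)  # (10, 100, 100) (10,)
```

`np.savez` keeps each keyword argument as a named array inside the archive, which is why the training script can later address them as `data["images"]` and `data["labels"]`.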
hxstarklin/DSAN | ['future prediction'] | ['Preserving Dynamic Attention for Long-Term Spatial-Temporal Prediction'] | train.py utils/Metrics.py main_4gpus.py data_parameters.py main_1gpu.py utils/CordinateGenerator.py utils/CustomSchedule.py utils/tools.py models.py utils/DataLoader.py utils/EarlystopHelper.py remove_oldfiles remove_oldfiles CordinateGenerator CustomSchedule EarlystopHelper MAPE RMSE MAE rmtree remove format | # DSAN (Dynamic Switch-Attention Network) [1] Haoxing Lin, Rufan Bai, Weijia Jia, Xinyu Yang, Yongjian You. Preserving Dynamic Attention for Long-Term Spatial-Temporal Prediction. KDD 2020. ACM Digital Library: https://dl.acm.org/doi/10.1145/3394486.3403046 @inproceedings{dsan, title = {Preserving Dynamic Attention for Long-Term Spatial-Temporal Prediction}, author = {Lin, Haoxing and Bai, Rufan and Jia, Weijia and Yang, Xinyu and You, Yongjian}, year = {2020}, doi = {10.1145/3394486.3403046}, booktitle = {Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining}, pages = {36–46}, | 2,350 |
hyenal/relate | ['scene generation'] | ['RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces'] | dataset/clevr.py tools/stats.py tools/attr_dict.py optimizer/__init__.py models/relate_helpers.py models/__init__.py dataset/clevr_obc.py tools/utils.py tools/model_io.py dataset/dummy.py models/relate_static.py dataset/shapestack.py main.py models/relate_video.py dataset/cars_real_traffic.py extras/frechet_video_distance.py models/relate_static_order.py extras/compute_fvd.py tools/resize.py dataset/__init__.py init_model trainvalidate MainConfig run CarsRealTraffic CLEVR_v1 CLEVROBC Dummy ShapeStack init_dataset main create_id3_embedding preprocess _is_in_graph calculate_fvd AdaIn NPE AdaIngen_bg AdaIngen_obj GAN_disc RELATEStat GAN_gen GAN_disc GAN_gen RELATEStatOrder GAN_disc RELATEVideo GAN_gen init_optimizer AttrDict nested_attr_dict save_model load_model get_checkpoint find_last_checkpoint get_stats_path get_optimizer_path get_model_path load_stats purge_epoch resize AverageMeter Stats make_image get_visdom_connection choice_exclude image_tensor is_sequence toimage bytescale get_net_input auto_init_args weight_init save_gif truncated_normal_ has_method save_image get_visdom_env gradient_penalty path_to_last get_checkpoint visdom_server synchronize_logged_vars hasattr load_model visdom_port load_state_dict log_vars resume_epoch find_last_checkpoint Stats get_visdom_env deepcopy join print extend exp_dir clear_optimizer seed str epoch init_optimizer gpu_idx init_dataset print init_model exp_dir DataLoader eval_only manual_seed is_available cuda range makedirs update visualize time model print backward num_iter zero_grad get_net_input eval train step range enumerate len join list dir1 print concatenate dir0 tqdm range exists len as_list list reshape resize_bilinear float32 cast get_tensor_by_name int i3d_model replace Module get_tensor_by_name get items list Adagrad _get_param_groups print zero_grad Adam SGD map MultiStepLR 
load_state_dict zip append items list AttrDict load open dump print get_stats_path get_optimizer_path get_model_path save open state_dict load get_stats_path get_optimizer_path get_model_path load_stats isfile join glob join sorted print remove get_checkpoint isfile join LANCZOS splitext isfile save listdir open exp_dir visdom_env Visdom affine isinstance fill_ normal_ InstanceNorm2d truncated_normal_ weight Linear squeeze add_ copy_ shape normal_ __class__ cuda f_back setattr f_locals getattr AttrDict co_argcount std view Variable size rand mean cuda is_available dim make_image save clamp size expand flatnonzero list asarray frombytes bytescale iscomplexobj astype float32 uint32 tostring shape putpalette amin ravel amax len float min max ones size len copy_ enumerate image_tensor clamp mimsave cpu numpy append | # RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces This repository contains the official implementation of **[RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces](https://arxiv.org/abs/2007.01272)** It provides the training and evaluation scripts as well as the datasets to reproduce results of the paper. <img alt="RELATE splash" src="docs/splash-figure.png"> ## Installation The dependencies of this codebase are managed via [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/download.html): ```bash # build environment conda env create -f environment.yml | 2,351 |
hyeongyuy/CT-CYCLE_IDNETITY_GAN_tensorflow | ['denoising'] | ['Cycle Consistent Adversarial Denoising Network for Multiphase Coronary CT Angiography'] | inout_util.py CYCLE_IDENTITY/code/cycle_identity_model.py CYCLE_IDENTITY/code/cycle_identity_module.py CYCLE_IDENTITY/code/main.py tf_psnr psnr DCMDataLoader ParseBoolean log10 ROI_img save_image ParseList cycle_identity generator lrelu conv2d batchnorm cycle_loss identity_loss discriminator least_square constant log reduce_mean mean subplots set_text astype float32 close imshow savefig lower | # CYCLE_IDENTITY_GAN-tensorflow Kang, E., Koo, H. J., Yang, D. H., Seo, J. B., & Ye, J. C. (2018). Cycle Consistent Adversarial Denoising Network for Multiphase Coronary CT Angiography. arXiv preprint arXiv:1806.09748.<br> * CYCLE_IDENTITY_GAN > * paper : https://arxiv.org/pdf/1806.09748.pdf > * reference code: > * cyclegan : https://github.com/xhujoy/CycleGAN-tensorflow ## I/O (DICOM file -> .npy) * Input data Directory * DICOM file extension = [<b>'.IMA'</b>, '.dcm'] > $ os.path.join(dcm_path, patient_no, [LDCT_path|NDCT_path], '*.' + extension) | 2,352 |
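The README in the row above builds its DICOM search path as `os.path.join(dcm_path, patient_no, [LDCT_path|NDCT_path], '*.' + extension)`; a small sketch of that pattern with the standard `glob` module (the function and directory names here are illustrative, not the repo's code):

```python
import glob
import os

def list_dicom_files(dcm_path, patient_no, dose_dir, extension):
    # Same pattern shape as the README: root / patient / dose dir / *.ext
    pattern = os.path.join(dcm_path, patient_no, dose_dir, "*." + extension)
    return sorted(glob.glob(pattern))

# glob returns an empty list when no files match the pattern.
files = list_dicom_files(".", "no_such_patient", "LDCT", "IMA")
print(files)  # []
```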
hyeonseob-nam/Batch-Instance-Normalization | ['style transfer'] | ['Batch-Instance Normalization for Adaptively Style-Invariant Neural Networks'] | models/batchinstancenorm.py utils/__init__.py utils/logger.py utils/visualize.py utils/progress/progress/helpers.py utils/progress/progress/__init__.py utils/progress/progress/spinner.py utils/progress/test_progress.py utils/misc.py utils/progress/progress/counter.py utils/progress/progress/bar.py utils/eval.py main.py models/resnet.py utils/progress/setup.py _BatchInstanceNorm BatchInstanceNorm3d BatchInstanceNorm1d BatchInstanceNorm2d ResNet conv3x3 BasicBlock Bottleneck accuracy plot_overlap savefig Logger LoggerMonitor init_params AverageMeter mkdir_p get_mean_and_std make_image show_mask_single show_mask gauss colorize show_batch sleep FillingSquaresBar FillingCirclesBar IncrementalBar ChargingBar ShadyBar PixelBar Bar Countdown Stack Counter Pie SigIntMixin WriteMixin WritelnMixin PieSpinner MoonSpinner Spinner PixelSpinner LineSpinner Progress Infinite topk size t eq mul_ expand_as append sum max asarray arange plot numbers enumerate len print DataLoader div_ zeros range len normal constant isinstance kaiming_normal Conv2d bias modules BatchNorm2d weight Linear makedirs numpy range zeros unsqueeze gauss show make_image imshow make_grid make_image subplot make_grid size clone axis upsampling imshow expand_as range make_image subplot make_grid size clone axis upsampling imshow expand_as cpu range len | # Batch-Instance-Normalization This repository provides an example of using [Batch-Instance Normalization (NIPS 2018)](https://arxiv.org/abs/1805.07925) for classification on CIFAR-10/100, written by [Hyeonseob Nam](https://hyeonseobnam.github.io/) and [Hyo-Eun Kim](https://www.linkedin.com/in/hekim0530/) at [Lunit Inc.](https://lunit.io/) Acknowledgement: This code is based on [Wei Yang's pytorch-classification](https://github.com/bearpaw/pytorch-classification) ## Citation If you use this code for your research, 
please cite: ``` @inproceedings{nam2018batch, title={Batch-Instance Normalization for Adaptively Style-Invariant Neural Networks}, author={Nam, Hyeonseob and Kim, Hyo-Eun}, booktitle={Advances in Neural Information Processing Systems}, | 2,353 |
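The Batch-Instance-Normalization repo above implements the gating idea from the cited NIPS 2018 paper; a rough NumPy sketch of that mechanism (a gate `rho` in [0, 1] mixing batch-normalized and instance-normalized activations), written here from the paper's description rather than taken from the repo's code:

```python
import numpy as np

def batch_instance_norm(x, rho, eps=1e-5):
    """Gate between batch norm and instance norm for x of shape (N, C, H, W)."""
    # Batch-norm statistics: over batch and spatial dims, per channel.
    mu_b = x.mean(axis=(0, 2, 3), keepdims=True)
    var_b = x.var(axis=(0, 2, 3), keepdims=True)
    x_bn = (x - mu_b) / np.sqrt(var_b + eps)
    # Instance-norm statistics: per sample and channel, over spatial dims.
    mu_i = x.mean(axis=(2, 3), keepdims=True)
    var_i = x.var(axis=(2, 3), keepdims=True)
    x_in = (x - mu_i) / np.sqrt(var_i + eps)
    # rho = 1 recovers pure batch norm, rho = 0 pure instance norm.
    return rho * x_bn + (1.0 - rho) * x_in

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))
out = batch_instance_norm(x, rho=0.5)
print(out.shape)  # (4, 3, 8, 8)
```

In the paper `rho` is a learnable per-channel parameter trained jointly with the affine scale and shift; here it is a scalar just to show the interpolation.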
hyliang96/CSGCNN | ['object localization'] | ['Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters'] | VOCPart/train/summaries.py VOCPart/VOC/preprocess.py VOCPart/train/dataLoader.py VOCPart/train/run.py VOCPart/mutualinfo/mutualinfo.py VOCPart/activmap/activmap.py VOCPart/gradmap/gradmap.py VOCPart/train/resnet_std.py VOCPart/train/seed.py VOCPart/VOC/VOCPart.py VOCPart/VOC/__init__.py VOCPart/VOC/utils.py VOCPart/train/train.py take get_IoU dev_zero_replace Report proc_gradmap to_image IoU imgshow DataLoader ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 LearnableMaskLayer resnet18 BasicBlock resent resnet101 set_work_init_fn set_seed TensorboardSummary train_model freeze select_param symlink_force plot_mask crop np_save_resize_img load_annotations VOCPart int sort min numel float sum max dtype norm gauss GaussianBlur2d numel sqrt type round tensor repeat shape sum min max show print transpose add_subplot colorbar imshow figure numpy detach ResNet load_url load_state_dict LearnableMaskLayer expansion Linear ResNet load_url load_state_dict LearnableMaskLayer expansion Linear ResNet load_url load_state_dict LearnableMaskLayer expansion Linear ResNet load_url load_state_dict LearnableMaskLayer expansion Linear ResNet load_url load_state_dict LearnableMaskLayer expansion Linear seed str manual_seed_all manual_seed symlink data lambda_reg model update_checkpoint_link zero_grad get_density save max cuda exists clip_lmask load_state_dict state_dict use_gpu epoch format float enumerate load join time criterion backward print load_checkpoint Variable makedirs exp_dir ifmask train step add_scalar children list named_children print named_parameters enumerate len children list named_children print named_parameters append enumerate len fromarray show suptitle add_subplot axis set_window_title imshow figure shape max fromarray convert save resize append range | # Training Interpretable CNNs with class-specific filters ## Paper 
Paper with supplementary: [Haoyu Liang, et al. Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters. ECCV 2020](https://arxiv.org/abs/2007.08194) The [slide](https://cloud.tsinghua.edu.cn/f/fa7b28ef8a344ca68ca0/?dl=1) and [video](https://cloud.tsinghua.edu.cn/f/939fb5e944f34aa8bb95/?dl=1) for ECCV 2020 oral. cite as ``` @inproceedings{liang2020training, title={Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters}, author={Liang, Haoyu and Ouyang, Zhihao and Zeng, Yuyuan and Su, Hang and He, Zihao and Xia, Shu-Tao and Zhu, Jun and Zhang, Bo}, booktitle={European Conference on Computer Vision}, | 2,354 |
hysts/pytorch_mpiigaze | ['gaze estimation'] | ['Appearance-Based Gaze Estimation in the Wild', 'MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation', "It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation"] | gaze_estimation/optim.py gaze_estimation/gaze_estimator/head_pose_estimation/head_pose_normalizer.py gaze_estimation/logger.py gaze_estimation/types.py gaze_estimation/gaze_estimator/common/camera.py gaze_estimation/scheduler.py gaze_estimation/utils.py gaze_estimation/gaze_estimator/common/eye.py tools/preprocess_mpiifacegaze.py convert_to_onnx.py gaze_estimation/gaze_estimator/common/face_parts.py gaze_estimation/models/mpiifacegaze/alexnet.py gaze_estimation/__init__.py gaze_estimation/config/__init__.py train.py gaze_estimation/datasets/mpiigaze.py gaze_estimation/datasets/__init__.py gaze_estimation/gaze_estimator/__init__.py gaze_estimation/gaze_estimator/common/__init__.py gaze_estimation/models/mpiifacegaze/backbones/__init__.py evaluate.py gaze_estimation/gaze_estimator/head_pose_estimation/face_landmark_estimator.py gaze_estimation/config/config_node.py gaze_estimation/models/mpiigaze/lenet.py gaze_estimation/tensorboard.py gaze_estimation/models/mpiifacegaze/backbones/resnet_simple.py gaze_estimation/transforms.py gaze_estimation/gaze_estimator/common/face.py gaze_estimation/datasets/mpiifacegaze.py gaze_estimation/config/defaults.py gaze_estimation/gaze_estimator/common/face_model.py gaze_estimation/gaze_estimator/common/visualizer.py tools/capture_video.py demo.py gaze_estimation/gaze_estimator/gaze_estimator.py tools/preprocess_mpiigaze.py gaze_estimation/losses.py gaze_estimation/gaze_estimator/head_pose_estimation/__init__.py gaze_estimation/models/mpiigaze/resnet_preact.py gaze_estimation/dataloader.py gaze_estimation/models/__init__.py gaze_estimation/models/mpiifacegaze/resnet_simple.py main main Demo main test main train validate create_dataloader _create_handlers _create_file_handler create_logger 
_create_color_formatter _create_plain_formatter _create_stream_handler create_loss get_param_list create_optimizer create_scheduler DummyWriter create_tensorboard_writer _create_mpiifacegaze_transform _create_mpiigaze_transform create_transform LossType GazeEstimationMethod save_config convert_to_unit_vector AverageMeter compute_angle_error setup_cudnn set_seeds create_train_output_dir load_config ConfigNode get_default_config OnePersonDataset OnePersonDataset create_dataset GazeEstimator Camera Eye Face FaceModel FaceParts FacePartsName Visualizer LandmarkEstimator HeadPoseNormalizer _normalize_vector create_model Model Model Model create_backbone initialize_weights Model initialize_weights Model BasicBlock main create_timestamp add_mat_data_to_hdf5 main save_one_person convert_gaze convert_pose main get_eval_info merge_from_file config load create_model add_argument output_path eval get_default_config load_state_dict ArgumentParser device export parse_args weight zeros load_config Demo run mean eval device float cat save_config print stem create_dataloader test mkdir Path output_dir save numpy checkpoint model zero_grad device to update size mean avg info item enumerate add_image time make_grid backward AverageMeter loss_function step add_scalar time info add_histogram AverageMeter named_parameters eval avg device model_params add_scalar validate setup_cudnn create_logger set_seeds val_first seed create_scheduler create_tensorboard_writer epochs range create_optimizer close info create_loss Checkpointer create_train_output_dir train step DataLoader create_dataset _create_handlers getLogger addHandler DEBUG setLevel INFO join _create_file_handler _create_color_formatter _create_plain_formatter append _create_stream_handler split DEBUG setLevel setFormatter StreamHandler setFormatter as_posix DEBUG setLevel FileHandler loss append no_weight_decay_on_bn named_parameters get_param_list Adam SGD MultiStepLR CosineAnnealingLR use_tensorboard Lambda Compose 
mpiifacegaze_face_size Lambda Compose mpiifacegaze_gray auto auto seed manual_seed benchmark deterministic merge_from_file config options add_argument freeze merge_from_list get_default_config ArgumentParser parse_args mkdir Path output_dir exists sqrt cos sin convert_to_unit_vector int OnePersonDataset ConcatDataset len val_ratio Path dataset_dir create_transform auto lower import_module Model device to name import_module isinstance Conv2d xavier_uniform_ zeros_ bias weight Linear ones_ kaiming_normal_ BatchNorm2d now VideoCapture CAP_PROP_FRAME_HEIGHT as_posix VideoWriter VideoWriter_fourcc release waitKey imshow set CAP_PROP_FRAME_WIDTH read write output cap_size uint8 astype list tqdm add_mat_data_to_hdf5 dataset exists arcsin arctan2 arcsin arctan2 drop read_csv apply as_posix pose image sorted iterrows stem append day glob astype convert_pose get_eval_info uint8 convert_gaze float32 dict gaze loadmat array save_one_person | # An unofficial PyTorch implementation of MPIIGaze and MPIIFaceGaze [](https://opensource.org/licenses/MIT) [](https://github.com/hysts/pytorch_mpiigaze) [Here](https://github.com/hysts/pytorch_mpiigaze_demo) is a demo program. See also [this repo](https://github.com/hysts/pl_gaze_estimation). ## Requirements * Linux (Tested on Ubuntu only) * Python >= 3.7 ```bash pip install -r requirements.txt | 2,355 |
hyunOO/sda_exp | ['stochastic optimization'] | ['Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour'] | networks/efficientnet/model.py networks/__init__.py dataloader/dataloader.py main.py networks/efficientnet/__init__.py networks/efficientnet/utils.py dataloader/__init__.py | validate metric_average accuracy main_worker main train imagenet_loader EfficientNet MBConvBlock Swish drop_connect get_model_params SwishImplementation round_repeats Conv2dStaticSamePadding Identity efficientnet_params MemoryEfficientSwish load_pretrained_weights BlockDecoder get_same_padding_conv2d round_filters Conv2dDynamicSamePadding efficientnet seed spawn print use_parallel device_count manual_seed main_worker parse_args use_distributed gpu workers validate batch_size SGD imagenet_loader MultiStepLR DataParallel DistributedDataParallel save cuda set_device use_parallel model_type from_name CrossEntropyLoss range SummaryWriter format init_process_group model_saving_dir lr startswith use_distributed int join add_scalar print set_epoch train epochs len format metric_average criterion model backward float size print zero_grad accuracy step cuda use_distributed enumerate len eval tensor reduce DistributedSampler Compose ImageFolder DataLoader Normalize int min_depth depth_divisor width_coefficient max depth_coefficient floor decode GlobalParams efficientnet efficientnet_params startswith _replace pop format print load_url load_state_dict | # SDA_EXP # Notes - The code only supports training on a single node. (single GPU with single server, or multi-GPUs with single server) # Things to implement - Current ImageNet training uses a warmup scheduler; the learning rate is increased from 0 to 0.256 in the first 5 epochs. (https://arxiv.org/abs/1706.02677) - Various augmentation techniques. # Things to verify - Do we need `average_gradients` before `optimizer.step()`? \ <https://github.com/pytorch/examples/issues/659> \ | 2,356 |
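The sda_exp notes above describe the Goyal et al. warmup schedule — the learning rate ramped linearly from 0 to 0.256 over the first 5 epochs; a pure-Python sketch of that rule (the function name and the iterations-per-epoch count are illustrative assumptions, not the repo's code):

```python
def warmup_lr(step, warmup_steps, peak_lr=0.256):
    # Linear ramp from 0 to peak_lr over warmup_steps, then hold at peak_lr.
    if step >= warmup_steps:
        return peak_lr
    return peak_lr * step / warmup_steps

# e.g. 5 warmup epochs at 100 iterations per epoch:
warmup_steps = 5 * 100
print(warmup_lr(0, warmup_steps))    # 0.0
print(warmup_lr(250, warmup_steps))  # 0.128, halfway through warmup
print(warmup_lr(900, warmup_steps))  # 0.256, warmup finished
```

In practice this per-step factor would be handed to the optimizer each iteration (e.g. via a lambda-based scheduler) before any decay schedule such as the repo's `MultiStepLR` takes over.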
i-pi/i-pi | ['active learning'] | ['Uncertainty estimation for molecular dynamics and sampling'] | tools/py/paraweights.py ipi/engine/motion/atomswap.py examples/cp2k/nvt-cl/idtau_plot.py ipi/engine/motion/geop.py ipi/__init__.py tools/py/mergebeadspdb.py tools/py/getacf.py tools/py/kinetic2tag.py ipi/engine/simulation.py examples/lammps/isofsc-vapor/process_out.py examples/lammps/isofsc-vapor/process_dif.py ipi/engine/motion/phonons.py tools/py/xyz2bin.py ipi/utils/io/backends/io_pdb.py ipi/inputs/motion/alchemy.py tools/py/gleacf.py tools/py/potential_energy_ppi.py doc/src/conf.py examples/lammps/h2o-planetary-64/estmod_dipole.py ipi/utils/inputvalue.py ipi/utils/__init__.py ipi/inputs/motion/phonons.py ipi_tests/unit_tests/engine/test_properties.py ipi_tests/__init__.py tools/py/fixcom.py ipi_tests/unit_tests/utils/__init__.py examples/ASEClient/aims_double_server/run-ase.py tools/py/posforce2kinetic.py tools/py/tests.py drivers/py/pes/dummy.py ipi/engine/motion/scphonons.py tools/py/trimsim.py tools/py/estmod_example.py tools/py/energy_ppi.py ipi/engine/motion/instanton.py ipi/external/importlib/__init__.py examples/ASEClient/aims/run-ase.py ipi_tests/test_tools.py ipi/utils/mintools.py ipi/engine/normalmodes.py setup.py ipi_tests/unit_tests/utils/io/__init__.py ipi/utils/io/backends/io_binary.py ipi/engine/motion/multi.py tools/py/bin2xyz.py ipi/external/importlib/bundledimportlib.py ipi_tests/unit_tests/common/xyz_generator.py tools/py/pepper.py ipi/utils/depend.py ipi_tests/unit_tests/utils/test_contraction.py ipi/interfaces/sockets.py ipi/engine/smotion/remd.py ipi/engine/smotion/__init__.py ipi/engine/motion/replay.py tools/py/heat_capacity_ppi.py tools/py/get_np_rad.py tools/py/get_np_vec.py drivers/py/driver.py tools/py/get_Ascp.py tools/py/planetary.py examples/lammps/isof-vapor/process_out.py ipi/engine/initializer.py ipi/utils/io/backends/io_json.py tools/py/remdsort.py drivers/py/pes/__init__.py ipi/engine/outputs.py ipi/inputs/motion/atomswap.py 
ipi/inputs/smotion/remd.py ipi/utils/io/__init__.py examples/lammps/isof-water/process_out.py ipi/engine/motion/planetary.py ipi/inputs/motion/planetary.py ipi/inputs/system.py tools/py/getproperty.py ipi_tests/unit_tests/engine/pdb_generator.py ipi/utils/softexit.py ipi/utils/hesstools.py ipi/inputs/motion/__init__.py ipi/inputs/motion/dynamics.py ipi/utils/prng.py tools/py/total_energy_ppi.py ipi/utils/constrtools.py ipi_tests/unit_tests/engine/test_initializer.py ipi/utils/io/backends/io_xyz.py ipi/engine/motion/motion.py examples/hswfqmc/FixWF_1Bead/intau_plot.py ipi/inputs/motion/motion.py ipi/inputs/smotion/smotion.py ipi/inputs/forcefields.py ipi/utils/io/inputs/io_xml.py examples/hswfqmc/WFOpt_2Bead/intau_plot.py ipi/inputs/motion/neb.py drivers/py/pes/rascal.py ipi/engine/atoms.py examples/lammps/isofsc-water/process_out.py tools/py/mux-positions.py ipi/engine/motion/vscf.py ipi_tests/unit_tests/utils/test_units.py tools/py/Instanton_postproc.py ipi/engine/barostats.py ipi/inputs/motion/ramp.py ipi/inputs/smotion/metad.py tools/py/a2b.py ipi/engine/beads.py doc/latex/help_list.py ipi/engine/smotion/smotion.py ipi/utils/io/inputs/__init__.py ipi/utils/nmtransform.py ipi_tests/examples/test_examples.py ipi/inputs/motion/al6xxx_kmc.py ipi_tests/unit_tests/utils/io/backends/__init__.py ipi/utils/phonontools.py tools/py/committee-reweight.py ipi_tests/examples/exampletools.py ipi/inputs/normalmodes.py ipi/inputs/simulation.py ipi/inputs/__init__.py ipi/engine/motion/neb.py ipi/engine/thermostats.py ipi_tests/unit_tests/utils/test_depend2.py ipi/utils/io/io_units.py ipi/utils/io/backends/__init__.py tools/py/xyz2pdb.py examples/ASEClient/CP2K/run-ase.py ipi/inputs/cell.py ipi/utils/messages.py ipi/inputs/interface.py ipi/engine/__init__.py ipi/inputs/motion/constrained_dynamics.py ipi_tests/unit_tests/common/__init__.py ipi/utils/decorators.py ipi_tests/unit_tests/common/folder.py doc/latex/create_man.py ipi/engine/smotion/metad.py 
tools/py/Instanton_interpolation.py ipi/inputs/ensembles.py ipi/inputs/smotion/__init__.py ipi/utils/exchange.py ipi/engine/motion/__init__.py ipi/engine/properties.py ipi/engine/motion/constrained_dynamics.py ipi/utils/distance.py tools/py/effective_temperatures.py ipi_tests/unit_tests/utils/test_io.py ipi/engine/smotion/multi.py ipi_tests/unit_tests/engine/test_atoms.py ipi/engine/smotion/dmd.py ipi_tests/regression_tests/test_run.py tools/py/regtest-parallel.py ipi/inputs/motion/geop.py ipi/engine/motion/alchemy.py ipi/inputs/prng.py ipi/interfaces/__init__.py tools/py/style.py ipi/inputs/outputs.py ipi_tests/regression_tests/runstools.py ipi/engine/system.py examples/yaff/mil53_ffsocket/run.py tools/py/rdf_ppi.py ipi/engine/forcefields.py ipi/inputs/motion/instanton.py ipi/engine/motion/ramp.py tools/py/contract-trajectory.py ipi/inputs/motion/scphonons.py doc/latex/help.py ipi/utils/sparse.py ipi/inputs/beads.py ipi/engine/cell.py ipi/utils/mathtools.py ipi/utils/units.py ipi/engine/forces.py tools/py/energies_ppi.py ipi/inputs/smotion/dmd.py ipi_tests/unit_tests/doc/test_docs.py tools/py/kinetic_energy_ppi.py ipi/inputs/thermostats.py ipi/inputs/forces.py ipi_tests/unit_tests/utils/test_depend.py examples/yaff/mil53_ffsocket/yaffdriver.py ipi/engine/ensembles.py ipi/utils/instools.py ipi_tests/unit_tests/utils/io/backends/test_io_xyz.py ipi/inputs/initializer.py tools/py/get_np_xyz.py ipi/engine/motion/dynamics.py ipi/inputs/atoms.py ipi_tests/unit_tests/utils/io/backends/test_io_units.py ipi/inputs/barostats.py tools/py/get_np.py drivers/py/pes/harmonic.py ipi/inputs/motion/vscf.py ipi/engine/motion/al6xxx_kmc.py ipi_tests/unit_tests/utils/io/backends/test__init__.py help help_list recv_data Message run_driver send_data Dummy_driver Harmonic_driver Rascal_driver Bfunc Afunc0 YAFFDriver MySocket Atoms Atom Barostat BaroSCBZP BaroBZP BaroMTK BaroRGB mask_from_fix Beads Cell ensemble_swap Ensemble ForceRequest FFYaff FFPlumed FFSocket FFDebye ForceField 
FFLennardJones NumpyEncoder FFdmd FFCommittee FFsGDML Forces ForceComponent ForceBead ScaledForceComponent init_beads InitIndexed InitBase init_file Initializer set_vector init_vector init_chk InitFile NormalModes BaseOutput PropertyOutput OutputList OutputMaker CheckpointOutput TrajectoryOutput Trajectories getall getkey help_latex Properties Simulation System ThermoNMGLE ThermoNMGLEG ThermoFFL ThermoCL hfunc ThermoPILE_L ThermoPILE_G ThermoGLE ThermoSVR ThermoLangevin Thermostat MultiThermo AlKMC AlchemyMC AtomSwap ConstrainedIntegrator NVEConstrainedIntegrator ConstraintSolver ConstrainedDynamics ConstraintSolverBase NVTConstrainedIntegrator NVEIntegrator NVTIntegrator SCNPTIntegrator NVTCCIntegrator DummyIntegrator SCIntegrator Dynamics NPTIntegrator NSTIntegrator CGOptimizer LBFGSOptimizer GeopMotion GradientMapper BFGSTRMOptimizer BFGSOptimizer DummyOptimizer SDOptimizer LineMapper LanczosOptimizer LBFGSOptimizer SpringMapper GradientMapper NicholsOptimizer Fix HessianOptimizer DummyOptimizer FullMapper NROptimizer InstantonMotion Motion MultiMotion NEBMover NEBLineMover NEBBFGSMover NMFDPhononCalculator DynMatrixMover ENMFDPhononCalculator DummyPhononCalculator FDPhononCalculator Planetary PressureRamp TemperatureRamp Replay SCPhononsMover SCPhononator DummyPhononator NormalModeMover IMF VSCF DummyCalculator DMD MetaDyn MultiSmotion motion_scale thermo_scale ReplicaExchange gle_scale Smotion _resolve_name import_module InputAtoms InputBaro InputBeads InputCell InputEnsemble InputFFDebye InputForceField InputFFLennardJones InputFFYaff InputFFSocket InputFFPlumed InputFFdmd InputFFCommittee InputFFsGDML InputForces InputForceComponent InputInitIndexed InputInitPositions InputInitFile InputInitLabels InputInitCell InputInitializer InputInitThermo InputInitMomenta InputInitVelocities InputInitBase InputInitMasses InputInterfaceSocket InputNMFrequencies InputNormalModes InputCheckpoint InputOutputs InputTrajectory InputProperties InputRandom InputSimulation 
InputSystem InputSysTemplate InputThermo InputThermoBase InputAlKMC InputAlchemy InputAtomSwap InputConstrainedDynamics InputConstraint InputConstraintSolver InputConstraintBase InputDynamics InputGeop InputInst InputMotionBase InputMotion InputNEB InputDynMatrix InputPlanetary InputTemperatureRamp InputPressureRamp InputSCPhonons InputNormalMode InputDMD InputMetaDyn InputReplicaExchange InputSmotion InputSmotionBase Disconnected InvalidSize DriverSocket InvalidStatus Status InterfaceSocket Message Driver EckartConstraint ConstraintBase AngleConstraint ConstraintList RigidBondConstraint ValueConstraintBase cached dd depend_array dcopy dpipe dstrip depraise synchronizer depend_base ddirect depend_value dobject dep_dot vector_separation Evaluate_dVB Evaluate_dEkn_on_atom Evaluate_VB Evaluate_EkN get_dynmat clean_hessian get_hessian InputValue InputArray InputRaw InputDictionary input_default InputAttribute Input ms_pathway invmul_banded print_instanton_geo banded_hessian diag_banded get_imvector print_instanton_hess red2comp sym_band exp_ut3x3 genh2abc root_herm mat_taylor h2abc_deg logsumlog _sinch gaussian_inv eigensystem_ut3x3 det_ut3x3 h2abc abc2h stab_cholesky invert_ut3x3 matrix_exp info warning banner Verbosity bracket_neb min_approx BFGS L_BFGS nichols Powell BFGSTRM min_trm min_brent min_brent_neb L_BFGS_nls bracket TRM_UPDATE o_nm_eva mk_nm_matrix nm_rescale mk_rs_matrix mk_o_nm_matrix nm_fft mk_o_rs_matrix nm_trans nm_noop nm_eva apply_asr Random Softexit csr_matrix sparse_matrix load_npz save_npz csc_matrix unit_to_internal Constants Elements unit_to_user process_units auto_units print_file_raw netstring_encoded_savez read_file_raw print_file iter_file iter_file_name read_file_name iter_file_name_raw read_file iter_file_raw open_backup print_file_path_raw _get_io_function netstring_encoded_loadz print_file_path read_binary print_binary read_json print_json_path iter_json print_json read_pdb print_pdb print_pdb_path print_xyz read_xyz print_xyz_path 
xml_handler xml_write xml_parse_file read_type write_bool write_dict write_list xml_parse_string read_array read_dict write_tuple read_int xml_node read_bool read_list read_tuple write_float write_type read_float modify_xml_4_dummy_test clean_tmp_dir get_test_list Runner get_test_settings Runner_examples find_examples test_example Runner_regression test_regtest local xyz_rand xyz_traj_filedesc xyz_traj distclean run_command test_make clean test_make_aux xyz_rand xyz_traj_filedesc xyz_traj test_names get_atoms create_xyz_sample_file test_init_file create_a_fake_system_obj test_Trajectories_print_traj prepare_Trajectories_print_traj check_up_and_down_scaling check_centroid_pos test_contraction check_rpc_consistency threadA A B threadB test_dotf test_slicing test_addition test_dot test_readonly test_increment test_iter_xyz2 test_read_pdb2 test_read_xyz2 test_print_pdb2 test_iter_pdb2 test_iter_pdb test_print_xyz test_read_pdb test_iter_xyz test_read_xyz test_print_pdb test_print_xyz2 test_case_insensitive test_process_units_noobj units_preparation test_process_units_object create_random_xyz_traj_to_write test_read_xyz create_random_xyz_traj_to_read test_iter_xyz test_print_xyz test_read_file prepare_read_file gleKernel Dqp Cqp gleacf input_facf ISRA output_facf Aqp main get_key_dim_units get_cell_units main uncertainty_CEA_multiple_models CEA direct_reweight commitee_reweight contract_trajectory main effectiveTemperatures extractUnits main read_U energies extractUnits main totalEnergy read_U Afunc2 Bfunc corr Afunc0 Afunc1 main compute_acf main get_A enablePrint blockPrint kernel histo get_np r2_K dhist_by_dr r2_hist_K r2_dK get_np histo3d_der histo3d get_np outer3 kernel histo histo_der get_np extractUnits main heatCapacity read_U spring_pot Filter get_rp_freq get_double main extractUnits main read_time kineticEnergy main main unwrap_positions wrap_positions main Planets simple_corr main getInput main extractUnits main read_U potentialEnergy main RDF answer_is_y 
_parser _build_test_index create_dir _file_is_test remove_file inplace_change main Test parse_regtest_string main main extractUnits main totalEnergy read_U main main main help_latex write help_xml open property_dict write help_latex traj_dict open itemsize byte size isscalar recv_into zeros dtype isscalar send tobytes array float64 resize AF_UNIX driver socket connect AF_INET sendall encode recv_data SOCK_STREAM recv chararray print int32 zeros Message send_data len shape range zeros reshape dict ones stressext copy read_file info append low open str parse fetch InputSimulation len cell motion syslist xml_parse_file warning beads open value init_file len Beads natoms range mode value p mode q len nm_rescale shape info b1tob2 low len find tuple strip len read_tuple find sorted replace hasattr tlist hasattr thermo_scale thermostat mlist motion_scale motion rindex range len _resolve_name __import__ startswith attribs copy update attribs fields update attribs update attribs fields update attribs fields update attribs fields update attribs fields update attribs fields fields copy deepcopy InitBase attribs deepcopy attribs deepcopy InitFile attribs deepcopy attribs InitIndexed deepcopy attribs append deepcopy attribs deepcopy attribs deepcopy attribs deepcopy attribs deepcopy attribs attribs copy attribs copy append attribs copy append attribs copy append attribs copy append attribs copy dstrip ndarray view isinstance add_dependency add_synchro _dependants hasattr _bval _tainted _func _synchro dot T norm bosons copy dot zeros omegan2 range nbeads enumerate bosons copy zeros omegan2 nbeads enumerate temp exp bosons Evaluate_EkN log zeros kb range nbeads len temp exp bosons Evaluate_dEkn_on_atom zeros kb range nbeads len print multiply reshape shape zeros range extract mod delete pi sign flatten eigh low list multiply transpose sum range high size eig tile info T norm medium reshape absolute dot eye zeros str remove loadtxt gm close copy flatten savetxt info open zeros low 
range len attribs copy omega2 concatenate ones range coef shape flatten zeros natoms diag nbeads empty range len high len sym_band info high eig_banded info high reshape info zeros range T norm high multiply outer sign absolute eigh pi info low str get_output print h2abc_deg h close_stream range unit_to_user str get_output reshape size close_stream savetxt list norm append zeros sum range exp log identity copy dot zeros range str T dot shape sqrt warning zeros low range dot sqrt float acos dot sqrt acos h2abc zeros cos sqrt sin zeros range zeros range zeros exp str T dot eigh sqrt warning zeros low range len matrix_power format warning identity log print print print trace print_stack copysign fdf debug info abs max copysign fdf debug low info abs bracket fdf debug multiply ones divide absolute maximum flatten sqrt dot add info zeros low max amax len min_approx debug subtract reshape flatten dot sqrt shape info max len norm subtract min_trm flatten dot mapper TRM_UPDATE dot T T print reshape size min dot shape eigh amin zeros sum max range min_approx print debug subtract reshape roll flatten dot sqrt shape info zeros max range len copysign fdf debug info abs max abs bracket_neb fdf debug print subtract add flatten sqrt dot roll info zeros range len T reshape size multiply absolute dot shape dot cos pi sqrt sin zeros float range cos pi sqrt zeros float range T mk_nm_matrix dot zeros range T mk_o_nm_matrix dot zeros range norm mod T natoms reshape transpose dot sqrt tile eye m zeros sum range property load loader ja savez kind m a savez_compressed saver ia n match group group search split auto_units high Cell info Atoms import_module getattr unit_to_user print_file_raw unit_to_user _get_io_function reader read_file_raw _get_io_function reader iter_file_raw format isfile rename startswith info low str savez write getvalue StringIO savez_compressed len load int read close files StringIO join asarray tofile names len join split fromfile zeros len h2abc_deg tolist write h 
dstrip dumps names natoms q readline asarray strip loads zeros abc2h len h2abc_deg write h dstrip q names natoms range nbeads h2abc_deg write h dstrip names natoms range q readline strip mass copy append float abc2h h2abc_deg write h dstrip q names natoms range nbeads h2abc_deg write h dstrip names natoms range q int abc2h inv dot resize zeros next array enumerate split xml_handler parseString xml_handler parse items fields strip index split range len len range read_list read_type read_list list map read_list split rstrip strip append list Path isfile glob remove append list range len str list parse items SubElement write extend open getroot find findall keys append enumerate append list Path isfile time format print Runner_examples index Path run Runner_regression time format print index Path run str random range xyz_rand range concatenate NamedTemporaryFile write seek xyz_traj chdir getcwd close call devnull split open run_command run_command run_command run_command names get_atoms enumerate read param addfinalizer Cell write close xyz_traj_filedesc mass NamedTemporaryFile append zeros Atoms range enumerate names name init_file m h assert_array_almost_equal assert_array_equal q enumerate Mock create_a_fake_system_obj param xyz_traj reshape copy NamedTemporaryFile compile Trajectories bind patch h assert_equal names print_traj assert_almost_equal q nm_rescale print b2tob1 shape b1tob2 assert_almost_equal nm_rescale b1tob2 b2tob1 assert_almost_equal dot mk_rs_matrix check_up_and_down_scaling assert_almost_equal check_up_and_down_scaling check_centroid_pos check_rpc_consistency sqrt range sleep print sqrt range sleep print type print type print zeros type dot type print dot zeros Atoms unlink local unlink local unlink local unlink local unit_to_internal process_units assert_array_equal print mass pi xyz_traj_filedesc copy type assert_array_almost_equal append abc2h array process_units assert_array_equal names print m mass pi xyz_traj_filedesc copy h 
assert_array_almost_equal append abc2h array q param seek reshape inv xyz_traj_filedesc dot match array range read_xyz assert_array_almost_equal assert_array_equal array range assert_array_equal iter_xyz array assert_array_almost_equal param seek h2abc_deg Cell xyz_traj_filedesc append Atoms range read remove name close write print_xyz NamedTemporaryFile zip flush param copy xyz_traj_filedesc concatenate flatten read_file assert_almost_equal array genfromtxt len T savetxt zeros asarray shape zeros asarray shape T eig inv copy dot real range len Dqp print eig inv Cqp copy dot sqrt real zeros abs max Aqp len int T asarray str dot savetxt log10 rjust range gleKernel xml_parse_file ISRA open str A fetch C output_facf temp parse InputSimulation close input_facf float __name__ T print loadtxt reshape dot eye nbeads len search group split read_file_raw print_file close get_key_dim_units read_file get_cell_units open array_pbc q T exp expand_dims sum array range T mean zeros expand_dims range mean var CEA temp stdout CEA T direct_reweight format print loadtxt uncertainty_CEA_multiple_models sqrt load_from_xml savetxt float abs flush format print nm_rescale unit_to_internal Cell print_file close any vstack auto_units b1tob2 Atoms range enumerate len int sorted format glob print unit_to_internal write q close unit_to_user open zeros float range kb hbar flush len effectiveTemperatures f2divm findcentroidvirialkineticenergy unit_to_user open sorted array extractUnits append range q format findcoupling glob unit_to_internal close read_U zip float hbar flush int print write zeros kb len readline seek print tell search exit append compile float readline unit_to_internal split energies int sorted glob print unit_to_internal len write extractUnits unit_to_user read_U read_file append zeros float kb range q open int find strip totalEnergy shape range zeros shape range zeros shape range zeros zeros range str sorted int names glob zeros floor append natoms Atoms range log len hamming 
rfft read_file_raw gradient pi conjugate open str list hfft ones savetxt real bartlett append sum range asarray unit_to_internal close copy mean sqrt hanning float int logical_or split blackman zeros len readline print search group warning split compile devnull open __stdout__ batch_weight_exponent xml_parse_file log open max_iter str exp fetch prefix append sinh sum range temp parse InputSimulation close sqrt cosh blockPrint float T print enablePrint loadtxt dot asr nan_to_num len pi xml_parse_file linspace abs open fft fetch savetxt append histo sum range temp parse asarray fftshift concatenate InputSimulation close mean sqrt float int print loadtxt dot zeros std nbeads exp sqrt exp pi zeros norm shape norm asarray copy dot shape zeros range len cumsum cos max str dhist_by_dr exit shape sin r2_hist_K len exp zeros abs int int exp asarray zeros abs range len hxyz flatten histo3d meshgrid clock RegularGridInterpolator histo3d_der int asarray copy sum range len int range len histo_der copy f2divm open sorted array extractUnits append range q format findcoupling glob unit_to_internal close read_U zip float hbar flush int print write zeros kb len heatCapacity zeros range concatenate range append delete len print size exit pi sqrt sin append range write dstrip eigh abs f2divm findcentroidvirialkineticenergy unit_to_user open sorted read_time array extractUnits append range format findcoupling glob unit_to_internal zip float hbar flush int print write read_file zeros kb len float readline split kineticEnergy print_file_path Beads float process_units flush format wrap_positions copy unwrap_positions open_backup auto_units next enumerate T inv h copy dot flatten T inv h copy dot flatten outtemplate xml_parse_file banner fetch syslist prefix filename parse unit_to_internal InputSimulation temp_list rsplit join format sorted items insert print add_argument estmods import_module warning ArgumentParser append parse_args flush shape range arange zeros Planets simulation qc zip 
f2divm unit_to_user open sorted array extractUnits append range format glob unit_to_internal read_U float hbar flush int print write read_file zeros kb len potentialEnergy f2divm unit_to_user sorted h savetxt append range q format glob unit_to_internal copy zip get_ih float hbar flush int print kb get_volume read_file updateqrdf zeros array len RDF join _build_test_index exit put create_dir task_done get_nowait Test add_argument ArgumentParser join relpath print write _file_is_test abspath append walk print remove remove answer_is_y isdir rmtree mkdir isfile exists input write join MULTILINE findall finditer compile swapfile list mlist deepcopy nbeads decode Popen f2divm findcentroidvirialkineticenergy array format findcoupling zip hbar flush isfile makedirs | i-PI: a Universal Force Engine ============================== A Python interface for ab initio path integral molecular dynamics simulations. i-PI is composed of a Python server (i-pi itself, that does not need to be compiled but only requires a relatively recent version of Python and Numpy) that apply an algorithm that updates the positions of the nuclei, and of an external code that acts as a client and computes the electronic energy and forces. This is typically a patched version of an electronic structure code, but a simple self-contained Fortran driver that implements Lennard-Jones and Silvera-Goldman potentials is included for test purposes. | 2,357 |
i-samenko/Triplet-net | ['word embeddings'] | ['Intuitive Contrasting Map for Antonym Embeddings'] | trainer.py dataset.py models.py utils.py quality_utils.py TripletDataset IntracranialDataset TripletNet_v3 TripletCosLoss SiameseNet ClassificationeNet_v3 TripletNet TripletLoss ClassificationNet ClassificationeNet_v2 EmbeddingNet TripletNet_v2 get_scores print_score plot_dev_destrib get_distance_distrib get_embedding_for_test_task fit convert_data_to_xy to_categorical plotter num_correct_samples show_raw_distrib accuracy_score print mean cross_val_score print StratifiedKFold LogisticRegression XGBClassifier get_scores state_dict append model to backward zero_grad is_triplet tqdm plotter eval load_state_dict loss_fn train step range cat len eval array concatenate show hist get_distance_distrib legend hstack tqdm append numpy array append hstack tqdm zeros data max item numpy max show clear_output format plot print grid title figure legend array hist array legend | i-samenko/Triplet-net | 2,358 |
i6092467/GVAR | ['time series'] | ['Interpretable Models for Granger Causality Using Self-explaining Neural Networks'] | bin/run_grid_search.py datasets/linear_examples.py datasets/fMRI/fmri.py plotting_utils.py experimental_utils.py datasets/lotkaVolterra/lotka_volterra.py models/linvar.py utils.py datasets/lorenz.py datasets/lotkaVolterra/multiple_lotka_volterra.py training.py models/senn.py run_grid_search plot_causal_structure_signed plot_causal_structure set_axis_style visualise_gen_coeffs_lotka_volterra plot_lambda_gamma_grid_signed plot_stability visualise_gen_coeffs_linear_var plot_lambda_gamma_grid plotting_setup plot_lambda_gamma_grid_binary training_procedure_trgc run_epoch training_procedure training_procedure_stable absolute_mean_relative_deviation kl_div_normal absolute_mean_deviation eval_causal_structure eval_concordance eval_causal_structure_binary construct_training_dataset kl_div_disc generate_linear_example_1 Lorenz96 get_fmri_simulation_ get_ts get_fmri_simulation get_connectivity print_ground_truth_connectivity next f simulate_lotka_volterra MultiLotkaVolterra LinVAR SENNGC str time eval_causal_structure_binary print today mean training_procedure_trgc savetxt mkdir eval_causal_structure append zeros round std range len add_subplot xticks abs log yticks show set_xlabel ylabel pcolormesh legend Inf get_position fill_diagonal set_position xlabel set_yticks set_xticks set_ylabel figure plotting_setup get_position show fill_diagonal set_position set_yticks set_xlabel add_subplot pcolormesh set_xticks set_ylabel figure legend plotting_setup Inf arange set_xticklabels set_xlim set_xticks set_ticks_position set_tick_params len rc use show join subplots xlabel loadtxt minorticks_on AutoMinorLocator grid ylabel colorbar imshow title set_minor_locator savefig plotting_setup xticks yticks show join subplots xlabel loadtxt minorticks_on AutoMinorLocator grid ylabel colorbar imshow title set_minor_locator savefig plotting_setup xticks yticks show join 
subplots xlabel loadtxt minorticks_on AutoMinorLocator grid ylabel colorbar imshow title set_minor_locator savefig plotting_setup xticks yticks show plot xlabel ylabel figure legend plotting_setup show arange plot xlabel ylabel set_linewidth axhline get_lines legend plotting_setup range show plot xlabel ylabel set_linewidth axhline get_lines legend plotting_setup range str norm arange criterion model backward print zero_grad shuffle mean to step range len seed SENNGC isinstance manual_seed print Adam run_epoch MSELoss device to construct_training_dataset range show str plot print xlabel training_procedure ylabel balanced_accuracy_score linspace quantile zeros range len str max print transpose training_procedure balanced_accuracy_score plot_stability linspace quantile zeros round flip range len arange concatenate astype zeros max range len roc_auc_score average_precision_score recall_score balanced_accuracy_score flatten precision_score accuracy_score flatten range len sum histogram mean std log seed normal concatenate sign uniform append zeros array range reshape expand_dims transpose print range str get_ts transpose squeeze get_connectivity loadmat load seed arange uniform zeros next array range normal f | ## Interpretable Models for Granger Causality Using Self-explaining Neural Networks ### Introduction Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. We propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. 
In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality. *Relational inference in time series*: <p align="center"> <img align="middle" src="https://github.com/i6092467/GVAR/blob/master/images/scheme_panel_1.png" alt="relational inference" width="500"/> </p> *In addition to structure, our approach allows inferring Granger-causal effect signs*: <p align="center"> <img align="middle" src="https://github.com/i6092467/GVAR/blob/master/images/scheme_panel_2.png" alt="interpretable relational inference" width="500"/> | 2,359 |
iN1k1/deep-pyramidal-representations-person-re-identification | ['person re identification'] | ['Aggregating Deep Pyramidal Representations for Person Re-Idenfitication'] | src/ml/net/pt/utils.py src/datamanager/transformer.py src/ml/net/pt/models.py src/datamanager/__init__.py src/features/deep.py src/configs/base.py src/utils/misc.py src/datamanager/dataset.py src/ml/metric/standard.py src/ml/net/pt/factory.py src/reid/reranking/k_reciprocal.py src/configs/__init__.py src/pyrnet/sampler.py src/configs/general.py src/__init__.py src/reid/feature_extractor.py src/ml/net/pt/resnet.py src/utils/__init__.py src/ml/net/pynet.py src/pyrnet/model.py src/configs/dataset.py src/pyrnet/features.py src/results/performance.py src/ml/net/__init__.py src/pyrnet/test.py src/results/reid.py src/ml/net/pt/spp.py src/ml/metric/utils.py src/visualization/visualizer.py src/datamanager/dataprovider.py src/pyrnet/metric.py src/ml/metric/dissimilarity.py src/ml/metric/metric.py src/reid/visualizer.py src/results/ranking_metrics.py src/ml/metric/__init__.py main.py src/results/__init__.py src/reid/__init__.py src/ml/net/pt/dense.py src/features/features.py src/ml/net/nmnet.py src/datamanager/datasetreid.py src/datamanager/utils.py src/ml/__init__.py src/features/__init__.py src/ml/net/pt/__init__.py train load save BaseConfig DatasetConfig GeneralConfig DataProvider Dataset DatasetReID ToNumpy DataTransformer RandomErasing is_image_file _get_triplet make_dataset_tensor find_classes load_npy copy_image compute_pairs_cams make_dataset_images is_tensor_file load_reid_dataset pil2np make_contiguous_targets compute_triplets load_image get_dataset DeepFeatures FeatureExtractor pytorch_pairwise_euclidean pairwise pytorch_pairwise_euclidean2 _square_distance DistanceMetric StandardMetric matirxscore2pairscore AverageMeter NMNet NetOpts DispOpts OptimOpts PyNet get_densenet_backbone DenseNet make_it_parallel PTModel ResNet get_resnet_backbone SpatialPyramidPooling init_weights_orthogonal_model 
init_weights_classifier_model accuracy init_weights_classifier_module cast init_weights_normal_module init_weights_normal_model init_weights_module_kaiming init_weights_model_kaiming init_weights_orthogonal_module get_features _fuse_dissimilarities _re_rank get_distance get_loss accuracy DenseNetReID TripleNet get_model TripletReIDLoss HardTripletSampler get_args evaluate FeatureExtractor display_dataset_strip k_reciprocal_neigh re_ranking multi_label_performance nAUC mean_average_precision get_matching_images average_precision precision_recall ndcg_at_k dcg_at_k mean_average_precision precision_at_k average_precision mean_reciprocal_rank r_precision ProbePerformance ReIDPerformance load create_list_of_dictionaries clone save _plot_with_range display_image display_ranked_matching_images _save_fig plot_cmc _draw_rectangle batch_size set_data_providers depth DatasetReID dataset RandomErasing exp_folder list name HardTripletSampler strftime map DatasetConfig append update format get_loss DataTransformer reda Normalize classes OptimOpts DispOpts alpha net join get_num_parameters make_it_parallel print init_meters_and_plots PyNet makedirs accuracy parameters NetOpts split increase_to_margin get_model epochs to_gpu len sort sort unique is_image_file join sorted sort append listdir walk join sorted sort is_tensor_file append listdir walk join int list sort chain append load_image listdir exists split int height resize convert BICUBIC width float load int height resize copy BICUBIC width float list product concatenate set append uint32 array extend append list product set join name save DatasetReID split cdist pytorch_pairwise_euclidean array norm inf clamp transpose matmul copy from_numpy sqrt view size copy from_numpy sqrt split expand_as zeros enumerate expand_as list product index append range len densenet161 densenet169 densenet201 Sequential add_module densenet121 print DataParallel resnet50 Sequential add_module resnet34 resnet18 resnet152 resnet101 
init_weights_module_kaiming modules data isinstance Conv2d normal_ kaiming_normal_ constant_ Linear init_weights_classifier_module modules data isinstance normal_ constant_ Linear modules init_weights_normal_module data isinstance Conv2d bias normal_ constant_ Linear modules init_weights_orthogonal_module data isinstance orthogonal_ constant_ Linear isinstance topk size t eq mul_ expand_as append sum max items time format print append normalize DeepFeatures extract_data_provider time gallery get_items_from_indexes format print pairwise_distance _re_rank DistanceMetric reshape indexes _fuse_dissimilarities mean probe append zeros array range len sum prod min array pairwise_distance tolist astype zeros float array range len format print DenseNetReID TripleNet load_from_checkpoint_path get_features matching_indexes depth DatasetReID dataset DataProvider get_distance cmc name metric DatasetConfig dirname get_matching_images epoch format display_ranked_matching_images DataTransformer size ReIDPerformance Normalize classes plot_cmc vars net checkpoint enumerate compute join make_it_parallel print PyNet extend split get_model to_gpu matching_ids len extend get_dataset DatasetConfig get_indexes_from_id_cam DataProvider zeros_like around max list exp transpose append sum range concatenate astype mean unique minimum int float32 argpartition k_reciprocal_neigh zeros len cumsum copy nan_to_num mean any zeros sum enumerate len precision_recall average_precision_score dict precision_recall_curve ravel range append list range len size sorted dcg_at_k zip fromarray uint8 cm astype savefig dirname abspath get_cmap save MinMaxScaler fit_transform makedirs show uint8 min astype axis close imshow set_visible figure _save_fig gca max update list plt_fun plot grid semilogx drop DataFrame range enumerate len _plot_with_range subplot show use set_fontsize get_yticklabels slice axes get_xticklabels close figure legend _save_fig get_cmap len axis GridSpec paste set_visible resize show list 
set_title new tolist imshow _save_fig gca range close _draw_rectangle enumerate figure len rectangle range Draw | # PyrNet This repo contains the source codes implemented to run the experiments for **person re-identification** used within the paper: ["*Aggregating Deep Pyramidal Representations for Person Re-Idenfitication*"](http://openaccess.thecvf.com/content_CVPRW_2019/papers/TRMTMCT/Martinel_Aggregating_Deep_Pyramidal_Representations_for_Person_Re-Identification_CVPRW_2019_paper.pdf), published in International Conference on Computer Vision and Pattern Recognition - Workshop on Target Re-identification and Multi-Target Multi-Camera Tracking, 2019. [](https://paperswithcode.com/sota/person-re-identification-on-dukemtmc-reid?p=aggregating-deep-pyramidal-representations) # Data The repository does not contain the datasets. You can download a copy of the Market-1501 dataset from here: [Datasets](https://drive.google.com/file/d/1HfgS3HveeY74Jz5rnTIrKB1eH4AIGwHg/view?usp=sharing). Just extract the zip within the "data" folder. To make the scripts running with other datasets (e.g., Duke, CUHK, etc.), you can just copy the original files with the same "data" folder. # Usage The solution has been written using the PyTorch framework and tested with the version specified with the *requirements.txt* file. If you want, feel free to run `pip install -r requirements.txt` to get all the dependencies in place. | 2,360 |
iPRoBe-lab/D-NetPAD | ['iris recognition'] | ['D-NetPAD: An Explainable and Interpretable Iris Presentation Attack Detector'] | train_DNetPAD.py dataset_Loader.py test_DNetPAD.py Evaluation.py fineTune_DNetPAD.py datasetLoader evaluation | # D-NetPAD Code for Iris Presentation Attack Detection based on DenseNet Architecture. # Requirement Pytorch, Numpy, Scipy, Pillow # Description The D-NetPAD takes a cropped iris image as input and produces a PA score between 0 and 1, where 0 means bonafide and 1 means presentation attack. Sample cropped iris images are provided in CroppedImages folder. <img src="https://github.com/sharmaGIT/D-NetPAD/blob/master/Images/Architecture.jpg" width="800" height="200"> # Testing The model can be downloaded from [here](https://drive.google.com/drive/folders/178o1ujoUb3b5HYi8_51b1r8XZ2wbEYc7?usp=sharing). Copy the model into the Model folder and run the following command: python test_D-NetPAD.py -imageFolder CroppedImages | 2,361 |
iamaaditya/pixel-deflection | ['adversarial attack'] | ['Deflecting Adversarial Attacks with Pixel Deflection'] | utils.py main.py methods.py process_image process_image_parallel get_arguments classify_images pixel_deflection denoiser get_img rgb2ycc batches ycc2rgb get_imagenet_labels get_map add_argument ArgumentParser preprocess_input join format decode_predictions print stack zip predict get_img window deflections get_map pixel_deflection zeros sigma format batches glob directory batch_size cpu_count print classify_images append shape range dot T array float array astype load_img Normalize load open | # Deflecting Adversarial Attacks with Pixel Deflection  Code for paper: https://arxiv.org/abs/1801.08926 Blog with demo: https://iamaaditya.github.io/2018/02/demo-for-pixel-deflection/ Requirements: 1. Keras 2.0+ (only used for classification - Pixel Deflection itself is deep learning platform independent) 2. Scipy 1.0+ (Older version of scipy wavelet transform does not have BayesShrink) ## Example | 2,362 |
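The pixel deflection operation named in the row above can be sketched in a few lines. This is an illustrative, simplified version — the function name and parameters are mine, and it omits the robust activation map weighting and the wavelet (BayesShrink) denoising step that the full method applies afterwards:

```python
import numpy as np

def pixel_deflection(img, deflections=100, window=10, rng=None):
    """Randomly replace pixels with other pixels sampled from a local window.

    This locally corrupts the image, which tends to destroy carefully
    crafted adversarial perturbations while leaving the image content
    recognizable to a classifier.
    """
    rng = np.random.default_rng(rng)
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(deflections):
        # pick a random target pixel
        r, c = rng.integers(0, h), rng.integers(0, w)
        # pick a random source pixel within `window` of the target, clipped to the image
        r2 = int(np.clip(r + rng.integers(-window, window + 1), 0, h - 1))
        c2 = int(np.clip(c + rng.integers(-window, window + 1), 0, w - 1))
        out[r, c] = out[r2, c2]
    return out
```

Every output pixel value comes from somewhere in the original image, so the corruption is a redistribution rather than additive noise; the denoising stage then softens it before classification.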
iampuntre/slam18 | ['language acquisition'] | ['Context Based Approach for Second Language Acquisition'] | src/prepare_data.py src/eval.py src/utils.py src/train_model.py src/config.py Config compute_f1 evaluate_metrics test_metrics compute_avg_log_loss compute_auroc main compute_acc main load_and_compute LogisticRegressionInstance LogisticRegression InstanceData load_data main load_labels convert_to_bool load_context_json Config params_file test_key evaluate_metrics join print add_argument iterkeys load_labels output_predictions ArgumentParser append parse_args range len range len list sorted zip float sum range len print range len compute_f1 compute_auroc compute_acc compute_avg_log_loss evaluate_metrics print test_file train_file load_and_compute dev_file print predict_test_set LogisticRegression dirname PrettyTable dev_key use_dev load_context_json load_data train add_row makedirs dict print dict | # Context based Approach for Second Language Acquisition This project is the implementation of the system submitted to the SLAM 2018 (Second Language Acquisition Modeling 2018) shared task. This page gives instructions for replicating the results in our system. ## Table of Contents <!-- toc --> - [Table of Contents](#table-of-contents) - [Installation](#installation) - [Downloading Data](#downloading-data) - [Parameters for the Experiment](#parameters-for-the-experiment) - [Prepare Data for training](#prepare-data) | 2,363 |
iancovert/Neural-GC | ['sparse learning', 'time series'] | ['A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems', 'Neural Granger Causality'] | models/__init__.py models/clstm.py models/cmlp.py models/crnn.py synthetic.py models/model_helper.py lorenz simulate_var simulate_lorenz_96 make_var_stationary cLSTMSparse ridge_regularize regularize cLSTM train_model_adam prox_update arrange_input LSTM train_unregularized restore_parameters train_model_ista train_model_gista cMLPSparse ridge_regularize MLP regularize train_unregularized train_model_adam prox_update cMLP restore_parameters train_model_ista train_model_gista cRNN RNN ridge_regularize cRNNSparse regularize train_model_adam prox_update arrange_input train_unregularized restore_parameters train_model_ista train_model_gista activation_helper hstack vstack eigvals abs max seed int normal hstack choice dot flatten eye make_var_stationary zeros range zeros range len seed normal odeint linspace zeros range flatten_parameters clamp weight_ih_l0 norm weight_ih_l0 parameters zip zeros range len zero_grad list p net_copy MSELoss append range cat ridge_regularize grad mean zip float net pop deepcopy backward print prox_update parameters loss_fn deepcopy list inf backward print step zero_grad Adam MSELoss parameters zip append sum restore_parameters range cat detach zero_grad list MSELoss append sum restore_parameters range cat detach inf mean zip float networks deepcopy backward print prox_update parameters clstm deepcopy list inf backward print step zero_grad Adam MSELoss parameters zip append sum restore_parameters range cat detach shape weight range shape weight lag lag grad lag lag cmlp crnn Tanh LeakyReLU Sigmoid ReLU | # Neural Granger Causality The `Neural-GC` repository contains code for a deep learning-based approach to discovering Granger causality networks in multivariate time series. 
The methods implemented here are described in [this paper](https://arxiv.org/abs/1802.05842). ## Installation To install the code, please clone the repository. All you need is `Python 3`, `PyTorch (>= 0.4.0)`, `numpy` and `scipy`. ## Usage See examples of how to apply our approach in the notebooks `cmlp_lagged_var_demo.ipynb`, `clstm_lorenz_demo.ipynb`, and `crnn_lorenz_demo.ipynb`. ## How it works The models implemented in this repository, called the cMLP, cLSTM and cRNN, are neural networks that model multivariate time series by forecasting each time series separately. During training, sparse penalties on the input layer's weight matrix set groups of parameters to zero, which can be interpreted as discovering Granger non-causality. The cMLP model can be trained with three different penalties: group lasso, group sparse group lasso, and hierarchical. The cLSTM and cRNN models both use a group lasso penalty, and they differ from one another only in the type of RNN cell they use. Training models with non-convex loss functions and non-smooth penalties requires a specialized optimization strategy, and we use a proximal gradient descent approach (ISTA). Our paper finds that ISTA provides comparable performance to two other approaches: proximal gradient descent with a line search (GISTA), which guarantees convergence to a local minimum, and Adam, which converges faster (although it requires an additional thresholding parameter). | 2,364 |
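The group lasso penalty and proximal (ISTA) update described under "How it works" can be sketched as follows — an illustrative NumPy version (the repository itself works in PyTorch, and the names and signatures here are mine, not the repo's):

```python
import numpy as np

def prox_group_lasso(W, lam, lr):
    """Proximal operator of the group lasso penalty lam * sum_j ||W[:, j]||_2.

    Each column of the input-layer weight matrix groups all weights attached
    to one candidate input series; a column driven exactly to zero is read
    as Granger non-causality for that series.
    """
    out = W.copy()
    norms = np.linalg.norm(out, axis=0)
    for j, n in enumerate(norms):
        if n <= lr * lam:
            out[:, j] = 0.0                   # whole group switched off
        else:
            out[:, j] *= (n - lr * lam) / n   # shrink the group toward zero
    return out

def ista_step(W, grad, lam, lr):
    """One ISTA step: gradient step on the smooth loss, then the prox."""
    return prox_group_lasso(W - lr * grad, lam, lr)
```

Because the proximal operator sets entire columns exactly to zero (rather than merely shrinking individual weights), the sparsity pattern of the trained input layer can be read off directly as an estimated Granger causality network.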
ibkuroyagi/Mechanisms-of-Action-Prediction | ['few shot learning'] | ['Generalizing from a Few Examples: A Survey on Few-Shot Learning'] | input/modules/trainer/tab_trainer.py working/node_train.py input/modules/losses/__init__.py input/modules/Qwicen/__init__.py working/calculate_dpgmm.py input/modules/utils/utils.py input/modules/losses/label_smooth_loss.py input/modules/facebookresearch/qhoptim.py input/modules/datasets/__init__.py input/modules/Qwicen/node.py input/modules/utils/preprocess.py input/modules/utils/__init__.py working/node_inference.py input/modules/datasets/Tab_dataset.py input/modules/utils/variables.py input/modules/facebookresearch/__init__.py MoaDataset from_accsgd QHM from_synthesized_nesterov from_nadam from_robust_momentum from_pid QHAdamW from_two_state_optimizer QHAdam SmoothBCEwLogits iterate_minibatches to_float_str check_numpy free_memory get_latest_file Entmoid15 _make_ix_like Entmax15Function StackingCNN nop_ctx TAB Lambda NODE DenseBlock ODST download process_in_chunks SparsemaxFunction md5sum ModuleWithInit to_one_hot TabTrainer mean_log_loss apply_zscore create_cluster preprocess_pipeline apply_pca create_dpgmm_proba preprocess reduce_columns apply_rank_gauss fe_stats seed_everything get_logger create_logger main main main sqrt sqrt view scatter_ size dim arange int callback list arange shuffle range len function slice tuple min zeros range Tensor numpy asarray isinstance glob md5 synchronize collect empty_cache sleep append astype range log_loss print reshape fit StandardScaler fit_transform values len print reshape fit fit_transform QuantileTransformer values len int print DataFrame concat PCA transform sum fit_transform fit append fit_transform VarianceThreshold print concat copy get_dummies fit BayesianGaussianMixture print concat predict_proba DataFrame fit skew kurtosis mean sum std map copy apply_zscore join create_cluster print apply_pca create_dpgmm_proba preprocess reduce_columns apply_rank_gauss fe_stats seed str 
manual_seed_all manual_seed is_available setFormatter format getLogger addHandler StreamHandler Formatter setLevel FileHandler apply_zscore BayesianGaussianMixture outdir warning ArgumentParser basicConfig list colorbar imshow title savefig parse_args update close predict_proba info vars items join add_argument figure apply_rank_gauss read_csv fit arange device values getattr to inference get reset_index MoaDataset mean_log_loss mean seed_everything enumerate MultilabelStratifiedKFold TabTrainer load_checkpoint to_csv preprocess_pipeline split zeros drop run scheduler_class optimizer_class optim lr_scheduler nn | ibkuroyagi/Mechanisms-of-Action-Prediction | 2,365 |
ibrahimayaz/Fila-GAN | ['style transfer'] | ['Synthesizing Filamentary Structured Images with GANs'] | Net.py Train.py vgg.py utils.py Opts.py StyleFeature.py dataBlocks.py np_load_func SequentialIterator SimpleBlocks DataIterator StepIterator DataBlocks matTonpy_35 build_data discriminator generator lrelu save_images TrainImgForTest resUnit_up lrelu1 TestImgForTest resUnit matTonpy inverse_transform imsave merge get_style_features get_content_features get_content_loss get_style_loss get_tv_loss get_img save_img list_files scale_img exists _pool_layer _conv_layer unprocess preprocess net load DataBlocks open batch_norm zeros enumerate resize_images mul transpose matmul preprocess float net resize_images preprocess mul net float get_style_features reduce_mean len float reduce_mean get_content_features len reduce_mean abs uint8 astype imsave shape float get_img imread dstack imresize walk extend _pool_layer relu reshape transpose _conv_layer mean enumerate conv2d constant | <p>Adapted from the paper "Synthesizing Filamentary Structured Images with GANs".</p><br> Paper link: https://arxiv.org/pdf/1706.02185.pdf<br> For detailed information, you should visit https://web.bii.a-star.edu.sg/archive/machine_learning/Projects/filaStructObjs/Synthesis/index.html. | 2,366
icarofua/siamese-two-stream | ['vehicle re identification'] | ['A Two-Stream Siamese Neural Network for Vehicle Re-Identification by Using Non-Overlapping Cameras'] | siamese.py siamese_two_stream.py siamese_test.py config.py utils.py generate_dataset.py collecting_positive_samples distance_string collecting_negative_samples build_negative_set build_positive_set run siamese_model calculate_metrics siamese_model get_batch_inds generator load_img small_vgg test_report calculate_metrics run append product replace print list collecting_positive_samples print shuffle append len int list join replace distance_string glob print choice append collecting_negative_samples shuffle len build_negative_set build_positive_set convnet Adam small_vgg Model L1_layer Input compile confusion_matrix convnet_car convnet_plate Input argmax calculate_metrics format print tolist write close zip append next range predict open img_to_array append get_batch_inds arange to_categorical append zeros enumerate len load generator shuffle ProcessPoolExecutor fit_generator open test_report ceil ModelCheckpoint len | # Deprecated: see (https://github.com/icarofua/vehicle-ReId) in order to have the full dataset and new models. # Demo code for paper "A Two-Stream Siamese Neural Network for Vehicle Re-Identification by Using Non-Overlapping Cameras" (https://arxiv.org/abs/1902.01496). If you find this code useful in your research, please consider citing: @article{icaroICIP2019, title={A Two-Stream Siamese Neural Network for Vehicle Re-Identification by Using Non-Overlapping Cameras}, author={de Oliveira, Icaro O and Fonseca, Keiko VO and Minetto, Rodrigo}, journal={IEEE International Conference on Image Processing (ICIP)}, year={2019} } ## Authors | 2,367 |
icc2115/Neural-GC | ['sparse learning'] | ['A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems'] | models/__init__.py models/clstm.py models/cmlp.py models/crnn.py synthetic.py models/model_helper.py lorenz simulate_var simulate_lorenz_96 make_var_stationary cLSTMSparse ridge_regularize regularize cLSTM train_model_adam prox_update arrange_input LSTM train_unregularized restore_parameters train_model_ista train_model_gista cMLPSparse ridge_regularize MLP regularize train_unregularized train_model_adam prox_update cMLP restore_parameters train_model_ista train_model_gista cRNN RNN ridge_regularize cRNNSparse regularize train_model_adam prox_update arrange_input train_unregularized restore_parameters train_model_ista train_model_gista activation_helper hstack vstack eigvals abs max seed int normal hstack choice dot flatten eye make_var_stationary zeros range zeros range len seed normal odeint linspace zeros range flatten_parameters clamp weight_ih_l0 norm weight_ih_l0 parameters zip zeros range len zero_grad list p net_copy MSELoss append range cat ridge_regularize grad mean zip float net pop deepcopy backward print prox_update parameters loss_fn deepcopy list inf backward print step zero_grad Adam MSELoss parameters zip append sum restore_parameters range cat detach zero_grad list MSELoss append sum restore_parameters range cat detach inf mean zip float networks deepcopy backward print prox_update parameters clstm deepcopy list inf backward print step zero_grad Adam MSELoss parameters zip append sum restore_parameters range cat detach shape weight range shape weight lag lag grad lag lag cmlp crnn Tanh LeakyReLU Sigmoid ReLU | # Neural Granger Causality The `Neural-GC` repository contains code for a deep learning-based approach to discovering Granger causality networks in multivariate time series. The methods implemented here are described in [this paper](https://arxiv.org/abs/1802.05842). 
## Installation To install the code, please clone the repository. All you need is `Python 3`, `PyTorch (>= 0.4.0)`, `numpy` and `scipy`. ## Usage See examples of how to apply our approach in the notebooks `cmlp_lagged_var_demo.ipynb`, `clstm_lorenz_demo.ipynb`, and `crnn_lorenz_demo.ipynb`. ## How it works The models implemented in this repository, called the cMLP, cLSTM and cRNN, are neural networks that model multivariate time series by forecasting each time series separately. During training, sparse penalties on the input layer's weight matrix set groups of parameters to zero, which can be interpreted as discovering Granger non-causality. The cMLP model can be trained with three different penalties: group lasso, group sparse group lasso, and hierarchical. The cLSTM and cRNN models both use a group lasso penalty, and they differ from one another only in the type of RNN cell they use. Training models with non-convex loss functions and non-smooth penalties requires a specialized optimization strategy, and we use a proximal gradient descent approach (ISTA). Our paper finds that ISTA provides comparable performance to two other approaches: proximal gradient descent with a line search (GISTA), which guarantees convergence to a local minimum, and Adam, which converges faster (although it requires an additional thresholding parameter). | 2,368 |
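The ISTA strategy described above, a gradient step on the smooth forecasting loss followed by a proximal step on the group penalty, can be sketched in plain NumPy. This is an illustrative sketch only, not the repository's PyTorch implementation; treating each column of the input-layer weight matrix `W` as one group (one candidate Granger cause) is an assumption made for the example.

```python
import numpy as np

def prox_group_lasso(W, lam, lr):
    # Proximal operator of the group-lasso penalty: each column of the
    # input-layer weight matrix is one group (one candidate Granger cause).
    W = W.copy()
    norms = np.linalg.norm(W, axis=0)
    for j, n in enumerate(norms):
        if n <= lr * lam:
            W[:, j] = 0.0                     # whole group set exactly to zero
        else:
            W[:, j] *= 1.0 - lr * lam / n     # surviving group is shrunk
    return W

def ista_step(W, grad, lam, lr):
    # One ISTA iteration: gradient step on the smooth loss, then prox step.
    return prox_group_lasso(W - lr * grad, lam, lr)

# Toy check: a weak column is zeroed out, a strong column is only shrunk.
W = np.array([[0.01, 1.0],
              [0.01, 1.0]])
W_new = prox_group_lasso(W, lam=1.0, lr=0.1)
```

A column of `W_new` that ends up exactly zero can then be read off as the corresponding input series being Granger non-causal for the predicted series.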
iceli1007/ICIAR2018_BACH_Challenge-DA-Refinenet | ['medical image segmentation', 'whole slide images', 'semantic segmentation'] | ['DA-RefineNet:A Dual Input Whole Slide Image Segmentation Algorithm Based on Attention'] | models/mobilenet.py src/get_accuracy.py src/util.py src/setup.py src/test.py src/train.py utils/layer_factory.py utils/helpers.py src/config.py models/resnet.py src/datasets.py InvertedResidualBlock MBv2 mbv2 Refinenet_double rf_lw101 rf_lw152 RefineNet ResNetLW_CA Refinenet_double_concat RefineNet_CA rf_lw50 Bottleneck Resnet ResNetLW ResNetLW_double Refinenet_double_attention ResNet152LW_double Refinenet_mylt_double BasicBlock RandomCrop_double Pad RandomMirror_double one_hot ICIARDataset ICIAR_double_Dataset RandomMirror ToTensor Normalise resize_double RandomCrop one_hot_double ToTensor_double ResizeShorterScale_double Normalise_double ResizeShorterScale Pad_double get_score ICIAR_test_Dataset get_sensitivity ToTensor Normalise Resize get_accuracy get_precision Normalise_double ToTensor_double get_specificity ICIAR_test_Dataset ToTensor Normalise Resize Normalise_double ToTensor_double create_optimisers validate dice_loss load_ckpt create_segmenter create_loaders train_segmenter BinaryDiceLoss DiceLoss main DiceLoss_1 get_arguments AverageMeter compute_params Saver prepare_img Visualizer maybe_download RCUBlock conv1x1 convbnrelu CRPBlock RRBBlock conv3x3 batchnorm RRBBlock_1 len get lower MBv2 load_state_dict maybe_download update load list print named_parameters load_state_dict Refinenet_double update load list print named_parameters load_state_dict Refinenet_double update load list print named_parameters load_state_dict Refinenet_double mean shape round zip append float sum prod sum maximum shape array round zip append float abs prod mean round zip append float sum mean round zip append float sum mean round zip append float sum add_argument ArgumentParser print size contiguous sum view format Compose DataLoader info Dataset len Adam SGD 
load items list get format load_state_dict ckpt_path info exists segm_crit zero_grad modules interpolate cuda append update format segmenter eval avg item info float long enumerate time isinstance backward set_stage AverageMeter BatchNorm2d train step len compute_iu format set_stage mean eval info zeros validate getLogger ignore_label set_start_method Saver ckpt_path save cuda seed str list show compute_params num_classes val_dir append range get_arguments train_dir manual_seed_all num_stages format plot create_loaders optim_dec normalise_params best_val info manual_seed is_available random_seed create_optimisers time load_ckpt evaluate print train_segmenter num_workers named_parameters match enc_pretrained enc empty_cache bool len numel named_parameters join format urlretrieve write getenv expanduser makedirs | # DA-Refinenet We propose a dual-input semantic segmentation network for WSI based on attention. The paper's address: https://arxiv.org/abs/1907.06358 Abstract—Due to the high resolution of pathological images, automated semantic segmentation of medical pathological images poses greater challenges than that of natural images. The Sliding Window method has shown its effectiveness in solving the problem caused by the high resolution of whole slide images (WSI). However, owing to its locality, the Sliding Window method also suffers from a lack of global information. In this paper, a dual input | 2,369
idea-iitd/graphgen | ['graph generation'] | ['GraphGen: A Scalable Approach to Domain-agnostic Labeled Graph Generation'] | baselines/graph_rnn/helper.py graphgen/model.py baselines/graph_rnn/train.py train.py args.py dfscode/dfs_wrapper.py graphgen/data.py baselines/dgmg/model.py evaluate.py baselines/dgmg/train.py utils.py baselines/dgmg/data.py datasets/process_dataset.py metrics/stats.py main.py metrics/setup.py model.py metrics/mmd.py datasets/preprocess.py graphgen/train.py baselines/graph_rnn/data.py baselines/graph_rnn/model.py Args create_model test_data train evaluate_loss train_epoch get_model_attribute save_graphs save_model load_model create_dirs load_graphs mkdir DGMG_Dataset_from_file create_model DGMG dgmg_message_weight_init ChooseDestAndUpdate AddNode weights_init GraphProp GraphEmbed AddEdge bernoulli_action_log_prob predict_graphs evaluate_loss Graph_Adj_Matrix_from_file Graph_Adj_Matrix get_attributes_len_for_graph_rnn graph_to_matrix create_model predict_graphs evaluate_loss graphs_to_min_dfscodes mapping get_bfs_seq graph_to_min_dfscode calc_max_prev_node min_dfscodes_to_tensors calc_max_prev_node_helper dfscode_to_tensor get_random_bfs_seq dfscodes_weights random_walk_with_restart_sampling dfscode_from_file_to_tensor_to_file check_graph_size produce_graphs_from_raw_format produce_random_walk_sampled_graphs produce_graphs_from_graphrnn_format create_graphs sample_subgraphs get_min_dfscode graph_from_dfscode Graph_DFS_code Graph_DFS_code_from_file RNN create_model MLP_Softmax MLP_Log_Softmax MLP_Plain predict_graphs evaluate_loss kernel_compute emd gaussian compute_mmd gaussian_emd node_label_worker write_graphs_from_dir node_label_and_degree_joint_stats clustering_worker novelity degree_worker nspdk_stats degree_stats edge_list_reindexed node_label_stats clustering_stats orca orbits_counts_worker orbit_stats_all edge_label_worker node_label_and_degree_worker uniqueness edge_label_stats create_model_graph_rnn create_model_dgmg 
create_model_graphgen eval_loss_graph_rnn eval_loss_dfscode_rnn eval_loss_dgmg items list format log_tensorboard backward add_scalar zero_grad note clip_grad_value_ parameters graph_type gradient_clipping train step evaluate_loss enumerate len list items eval len save_model MultiStepLR device get_model_attribute list load_model Adam SummaryWriter format log_tensorboard graph_type items print load_model_path note train_epoch test_data epochs add_scalar isdir exit rmtree eval input makedirs listdir range len rmtree model_save_path current_temp_path temp_path tensorboard_path makedirs str list items makedirs fname save current_model_save_path state_dict load items list load_state_dict to load data isinstance orthogonal_ normal_ parameters xavier_normal_ GRUCell Linear ModuleList isinstance apply cat to len batch_size subgraph device max count get_model_attribute list load_model to_undirected nodes edges append range add_edge create_model Graph eval model_path add_node items connected_components train_args data get_attributes_len_for_graph_rnn ones reshape nodes cat get_random_bfs_seq zeros relabel_nodes tril_ len get_attributes_len_for_graph_rnn max_head_and_tail max_prev_node len data binary_cross_entropy num_layers max_head_and_tail device max get_attributes_len_for_graph_rnn view bincount to range cat pack_padded_sequence size pad_packed_sequence item sort reshape min extend index_select zeros init_hidden max_prev_node data num_layers max_head_and_tail get_attributes_len_for_graph_rnn exit max_num_node cat size print reshape min zeros init_hidden max_prev_node len load data dump endswith print len close nodes min tqdm edges listdir max open dict bfs_successors get_bfs_seq nodes randperm item from_numpy_matrix to_numpy_matrix len endswith listdir print append int enumerate dfscode_to_tensor endswith listdir append min edges get_random_bfs_seq append range max relabel_nodes len print len add_edge list Graph neighbors item range add_node int add_edge Graph 
is_connected set add range add_node arange selfloop_edges subgraph is_connected max str list map nodes range clustering add_edges_from Graph astype remove_edges_from remove_nodes_from add_node set_edge_attributes join isolates loadtxt convert_node_labels_to_integers int selfloop_edges is_connected convert_node_labels_to_integers sqrt remove_edges_from random_walk_with_restart_sampling range join remove format selfloop_edges print Graph convert_node_labels_to_integers endswith shuffle rename remove_edges_from append listdir enumerate len dataset_path produce_graphs_from_raw_format min_dfscode_path mapping current_dataset_path graphs_to_min_dfscodes min_dfscodes_to_tensors exit produce_graphs_from_graphrnn_format produce_random_walk_sampled_graphs current_temp_path format graph_type mkdir join produce_graphs time num_graphs print len mkstemp remove close Graph int add_edge add_node float transpose nll_loss weights expand mean sum selfloop_edges graph_from_dfscode remove_edges_from sample max_num_edges astype float max len emd norm preprocess vectorize max kernel_compute print compute_mmd now gaussian_emd zeros nodes len print compute_mmd nodes now gaussian_emd len zeros edges len print compute_mmd now gaussian_emd edges len list histogram values print compute_mmd now dict nodes append edges str remove number_of_nodes name check_output strip write close len edge_list_reindexed number_of_edges find array open orca sum number_of_nodes print compute_mmd array now zeros nodes len print compute_mmd nodes now gaussian_emd len print compute_mmd now remove format write_graphs_from_dir print close set mkstemp intersection len remove format write_graphs_from_dir print close mkstemp len | # GraphGen: A Scalable Approach to Domain-agnostic Labeled Graph Generation This repository is the official PyTorch implementation of GraphGen, a generative graph model using an auto-regressive approach.
Nikhil Goyal, Harsh Vardhan Jain, and [Sayan Ranu](http://www.cse.iitd.ac.in/~sayan/index.html), [GraphGen: A Scalable Approach to Domain-agnostic Labeled Graph Generation](https://arxiv.org/pdf/2001.08184.pdf), in WWW, 2020. Most of the code has been adapted from [GraphRNN](https://github.com/snap-stanford/GraphRNN). ## Installation We recommend the [anaconda](https://www.anaconda.com/distribution/) distribution for Python and other packages. The code has been tested with [PyTorch 1.2.0](https://pytorch.org/) and Python 3.7.0. Install PyTorch and pip in conda as follows; change the CUDA version to match your GPU hardware. ```bash conda install pip pytorch=1.2.0 torchvision cudatoolkit=10.1 -c pytorch ``` | 2,370
idiap/psfestimation | ['depth estimation'] | ['Spatially-Variant CNN-based Point Spread Function Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy'] | train.py figure_noise_resistance.py figure_depth.py generate_training_set.py evaluation_accuracy.py toolbox.py benchmark_models.py test_plane_graph test Example Figure_Canvas test_everything print_noisy test_plane_stats test_noise eval_regression_accuracy test_moving_window do_convolution_gaussian handle_list create_hdf5 do_convolution_zernike get_parser pickle_load plot_images multipage create_dir to_16_bit Params unpad center_crop rand_float stretch_contrast to_32_bit gaussian_kernel normalize to_radial center_crop_pixel get_wavefront random_crop write_tiff_stack get_psf scale rand_int to_radian to_8_bit convolve pickle_save scale3d noisy isdtype load_oneimage DatasetFromHdf5 BasicBlockX l2variance resnext50 l2loss Bottleneck make_dataset ImageFolder resnet34 save Noise load_crops BasicBlock find_classes resnext18 resnet34_pretrained resnet50_pretrained resnet18 test_image resnext34 BottleneckX grayloader ResNet ResNeXt has_file_allowed_extension load DatasetFolder To_224 conv3x3 train load basicConfig format getLogger eval_regression_accuracy removeHandler eval info load_crops setLevel INFO update join list items isinstance concat to_csv test copy append DataFrame update join list format items isinstance print concat min to_csv copy test linspace info DataFrame max random_open_crop format astype to_long scale Tensor float numpy center_crop_pixel imsave pi polyfit linspace max show tan ylabel ylim savefig legend __abs__ fit_fn format errorbar asarray plot tight_layout mean r2_score enumerate var poly1d print xlabel rc min subplots_adjust figure format test_moving_window test_plane_graph glob print to_csv now append DataFrame len children model round cuda max FloatTensor center_crop imshow title append imread imsave range format asarray astype in_channels scale load time To_224 print 
reshape min int32 figure out_features zeros train numpy update asarray format info len r2_score append range enumerate add_argument ArgumentParser handle_file int random_generate format print black synthetic exit write close create_dir points scale zeros imread range open psf_size rand_int rand_float gaussian_kernel get_psf rand_float choice Params psf_size sorted format remove glob print len PdfPages close savefig makedirs rollaxis write_image print close flipud range imsave open min max flatten shape norm eps BZ2File dump close open add_subplot axis clf ion number set_title matshow transpose colorbar imshow ceil append range set_xlim autoscale sqrt enumerate deepcopy suptitle figure set_ylim len print randint sum var normal list FloatTensor ones size rotate log2 gaussian_kernel scale unique ceil Tensor float numpy max zeros poisson len eps sum eps norm all print isfinite abs max print uint8 isdtype print uint16 isdtype print float32 isdtype int round sqrt int floor print zeros arange load BZ2File close open unpad shape pad normalize max fftconvolve tilt_angle exp size to_radial cos pi sph sin tilt ast focus coma_angle ast_angle coma roll floor linspace abs unpad pad meshgrid ceil sum get_wavefront size magnification na fft2 astype power float tubelength n int float32 wavelength pixelsize imread lower sort join sorted has_file_allowed_extension append expanduser listdir walk DatasetFromHdf5 format imsave print Compose exit base DataLoader info expanduser numpy read_csv grayloader DataLoader cuda cuda cuda cuda cuda resnet34 in_features cuda Linear resnet50 cuda in_features Linear mean mean model strip zero_grad l2loss adjust_learning_rate around save cuda list step Adam append test_image range format param_groups info backward parameters numpy len strip format info strip format info children format model l2loss eval numpy info float round cuda len | # PSF Estimation Code for the PyTorch implementation of "Spatially-Variant CNN-based Point Spread Function 
Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy", IEEE Transactions on Image Processing, 2020. https://ieeexplore.ieee.org/document/9068472 ## Abstract Optical microscopy is an essential tool in biology and medicine. Imaging thin, yet non-flat objects in a single shot (without relying on more sophisticated sectioning setups) remains challenging as the shallow depth of field that comes with high-resolution microscopes leads to unsharp image regions and makes depth localization and quantitative image interpretation difficult. Here, we present a method that improves the resolution of light microscopy images of such objects by locally estimating image distortion while jointly estimating object distance to the focal plane. Specifically, we estimate the parameters of a spatially-variant Point Spread Function (PSF) model using a Convolutional Neural Network (CNN), which does not require instrument- or object-specific calibration. Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions, while remaining robust to object rotation, illumination variations, or photon noise. When the recovered PSFs are used with a spatially-variant and regularized Richardson-Lucy (RL) deconvolution algorithm, we observed up to 2.1 dB better Signal-to-Noise Ratio (SNR) compared to other Blind Deconvolution (BD) techniques. Following microscope-specific calibration, we further demonstrate that the recovered PSF model parameters permit estimating surface depth with a precision of 2 micrometers and over an extended range when using engineered PSFs. Our method opens up multiple possibilities for enhancing images of non-flat objects with minimal need for a priori knowledge about the optical setup. ## Requirements The following Python libraries are required. We advise the use of the conda package manager. > numpy > scikit-image > pytorch
idiap/tsoftmax | ['unity'] | ['A t-distribution based operator for enhancing out of distribution robustness of neural network classifiers'] | src/plot_fom.py src/convnet.py src/densenet.py src/tsoftmax.py src/get_FOM.py src/train.py src/get_conf.py src/utils.py ConvNet DenseNet TransitionBlock BottleneckBlock DenseBlock BasicBlock main confidence test main main main train save_model test TSoftmax Quadratic TLogSoftmax get_eer get_normal get_fprtpr95 T backward nll_loss last_layer zero_grad argmax out time format print_time print confidence eval mode item cpu zeros dataset round enumerate len data DataLoader ArgumentParser SVHN device arch DataFrame get_normal seed FashionMNIST load_state_dict parse_args to LSUN format save_path Compose lsun_path test CIFAR10 manual_seed MNIST FakeData load print add_argument to_csv KMNIST mode test_data quit makedirs subplots linspace round exists show exp set_title set_yscale set_xlabel precision_recall_curve legend sum ood reset_index plot concatenate size get_fprtpr95 get_eer auc set_index min roc_curve set_ylabel hist to_numpy read_csv len arange max csv_dest tolist bar range set_xticklabels unique enumerate set_xticks set_ylim format state_dict save makedirs backward model nll_loss zero_grad step enumerate save_model SGD MultiStepLR milestones epochs nu train parameters step nanargmin absolute argmax print quit format | # _t_-softmax pytorch reproducibility code This repository contains the code to reproduce the results of the paper: [Niccolò Antonello, Philip N. Garner "A _t_-distribution based operator for enhancing out of distribution robustness of neural network classifiers," IEEE Signal Processing Letters, 2020](https://arxiv.org/abs/2006.05389) The code is based on the [Pytorch machine learning library](https://github.com/pytorch/pytorch). If you want to use _t_-softmax in your classifiers/neural networks you can find the modules in `src/tsoftmax.py`. 
## Installation We use [conda](https://docs.conda.io/en/latest/miniconda.html) to create a reproducible environment. Run: ``` conda env create -f conda_env.yml | 2,372 |
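To make the idea of a t-distribution based softmax concrete, here is a minimal NumPy sketch that normalizes heavy-tailed Student-t style kernels of the shifted activations instead of exponentials. The exact parameterization in `src/tsoftmax.py` may well differ; both the kernel form and the shift-by-maximum step are assumptions made purely for illustration.

```python
import numpy as np

def softmax(z):
    # Standard softmax with the usual max-shift for numerical stability.
    e = np.exp(z - z.max())
    return e / e.sum()

def t_softmax(z, nu=2.0):
    # Replace the exponential kernel with a Student-t style kernel whose
    # tails decay polynomially; nu controls how heavy the tails are.
    s = z - z.max()                                   # s <= 0 everywhere
    k = (1.0 + s ** 2 / nu) ** (-(nu + 1.0) / 2.0)    # heavy-tailed kernel
    return k / k.sum()

z = np.array([4.0, 1.0, 0.0])
p_exp = softmax(z)
p_t = t_softmax(z)
```

On this toy input the t-based variant keeps the same argmax but assigns it lower confidence than the exponential softmax, which is the kind of tempered behavior that helps on out-of-distribution inputs.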
idiap/wmil-sgd | ['sentiment analysis', 'multiple instance learning'] | ['Explicit Document Modeling through Weighted Multiple-Instance Learning'] | wmil_sgd.py | <p><b>wmil-sgd</b> — This repository contains a Python implementation of the weighted multiple-instance learning (<a href="https://github.com/nik0spapp/wmil">wmil</a>) algorithm based on stochastic gradient descent which was presented at JAIR 2017 paper [<a href="http://publications.idiap.ch/downloads/papers/2017/Pappas_JAIR_2017.pdf">1</a>]. This algorithm, which was originally proposed at EMNLP 2014 paper [<a href="http://publications.idiap.ch/downloads/papers/2014/Pappas_EMNLP14_2014.pdf">2</a>], is a weakly supervised learning model, which jointly learns to focus on relevant parts of a document according to the context along with a classifier for the target categories. The model takes as input a document Bi (bag), which consists of multiple input vectors <i>b<sub>ij</sub></i> (instances), possibly from a neural network. The model learns to compute a weighted average of these vectors by estimating the weights <i>ψ<sub>ij</sub></i> for each document <i>B<sub>i</sub></i> and its target categories <i>y<sub>i</sub></i> <i>∈ R<sup>k</sup></i>. <p align="center"> <img align="center" src="images/overview.png" alt="Explicit document modeling using weighted multiple-instance learning" width="650"/> </p> </p> ``` @ARTICLE{Pappas_JAIR_2017, author = {Pappas, Nikolaos and Popescu-Belis, Andrei}, title = {Explicit Document Modeling through Weighted Multiple-Instance Learning}, | 2,373 |
idirlab/claimspotter | ['text classification'] | ['Gradient-Based Adversarial Training on Transformer Networks for Detecting Check-Worthy Factual Claims'] | adv_transformer/core/api/__init__.py adv_transformer/core/models/model.py adv_transformer/core/utils/freq_plot.py adv_transformer/core/models/ctransf/albert.py adv_transformer/core/models/ctransf/bert.py dba_app.py adv_transformer/core/models/ctransf/modeling_auto.py bidirectional_lstm/__init__.py data/glove/glove_to_w2v.py adv_transformer/distrib_analysis.py data/__init__.py adv_transformer/eval.py bidirectional_lstm/bilstm-train.py adv_transformer/core/utils/data_loader.py adv_transformer/core/models/ctransf/distilbert.py adv_transformer/core/utils/flags.py svm/__init__.py bidirectional_lstm/bilstm-adv-train.py adv_transformer/clef_eval_2019.py adv_transformer/logit_analysis.py svm/svm-train.py adv_transformer/demo.py adv_transformer/clef_eval_2020_task5.py adv_transformer/core/models/ctransf/modeling_tf_outputs.py data/word2vec/w2v_to_txt.py adv_transformer/clef_eval_2020_task1.py adv_transformer/core/__init__.py adv_transformer/__init__.py adv_transformer/benchmark.py adv_transformer/core/models/__init__.py adv_transformer/core/models/ctransf/roberta.py adv_transformer/core/utils/transformations.py adv_transformer/core/utils/__init__.py adv_transformer/core/api/api_wrapper.py adv_transformer/train.py __init__.py adv_transformer/core/utils/compute_ndcg.py adv_transformer/core/utils/process_results.py get_user_input generate_sentence get_score compute_ndcg compute_average_precision compute_dcg_term compute_precisions main main get_score compute_kde main train_model main eval_model ClaimSpotterAPI ClaimSpotterModel ClaimSpotterLayer TFAlbertLayerGroup TFAlbertForTokenClassification TFAlbertForPreTrainingOutput TFAlbertForQuestionAnswering TFAlbertTransformer TFAlbertModel TFAlbertSOPHead TFAlbertForSequenceClassification TFAlbertEmbeddings TFAlbertSelfOutput TFAlbertForMultipleChoice TFAlbertLayer 
TFAlbertAttention TFAlbertPreTrainedModel TFAlbertForMaskedLM TFAlbertMLMHead TFAlbertForPreTraining TFAlbertMainAdvLayer TFBertModel TFBertLMHeadModel TFBertAttention TFBertPreTrainedModel TFBertForPreTraining TFBertIntermediate TFBertNSPHead TFBertMainAdvLayer TFBertForMaskedLM TFBertSelfAttention TFBertLayer TFBertPredictionHeadTransform TFBertEmbeddings TFBertEncoder TFBertPooler TFBertOutput TFBertForSequenceClassification TFBertForTokenClassification TFBertForPreTrainingOutput TFBertSelfOutput TFBertForNextSentencePrediction TFBertMLMHead TFBertForMultipleChoice TFBertForQuestionAnswering TFBertLMPredictionHead TFDistilBertMainAdvLayer TFDistilBertForMaskedLM TFDistilBertPreTrainedModel TFFFN TFDistilBertForQuestionAnswering TFMultiHeadSelfAttention TFDistilBertModel TFTransformerBlock TFDistilBertForSequenceClassification TFDistilBertForTokenClassification TFEmbeddings TFDistilBertForMultipleChoice TFTransformer TFDistilBertLMHead TFAutoModelForSequenceClassification TFAutoModel TFAutoModelForQuestionAnswering TFAutoModelWithLMHead TFAutoModelForMultipleChoice TFAutoModelForTokenClassification TFAutoModelForPreTraining TFSeq2SeqQuestionAnsweringModelOutput TFCausalLMOutput TFTokenClassifierOutput TFSeq2SeqLMOutput TFMultipleChoiceModelOutput TFCausalLMOutputWithPast TFSeq2SeqModelOutput TFQuestionAnsweringModelOutput TFNextSentencePredictorOutput TFBaseModelOutput TFBaseModelOutputWithPast TFMaskedLMOutput TFSequenceClassifierOutput TFBaseModelOutputWithPooling TFSeq2SeqSequenceClassifierOutput TFRobertaForMultipleChoice TFRobertaForMaskedLM TFRobertaForSequenceClassification TFRobertaSelfOutput TFRobertaEmbeddings TFRobertaClassificationHead TFRobertaModel TFRobertaLayer TFRobertaForTokenClassification TFRobertaPooler TFRobertaPreTrainedModel TFRobertaAttention TFRobertaForQuestionAnswering TFRobertaOutput TFRobertaSelfAttention TFRobertaLMHead TFRobertaIntermediate TFRobertaMainAdvLayer TFRobertaEncoder compute_dcg_term compute_ndcg DataLoader Dataset 
clean_argv print_flags plot_stuff process_dataset char_list_to_string expand_contractions correct_mistakes remove_possessives transform_sentence_complete expand_sentence process_sentence_full_tags process_sentence_ner_spacy get_sentiment list_to_string get_tags load_dependencies strip_chars remove_kill_words load_deps_dummy create_model grad compute_ndcg adv_grad compute_average_precision make_embedding compute_dcg_term loss compute_average_precision compute_dcg_term create_model compute_ndcg compute_kde trainModels evaluate getPOS get_sentiment compute_ndcg getPOSVector compute_average_precision compute_dcg_term getCFSScore compute_precisions get_et_vector getNgram single_sentence_query sorted min len range enumerate sorted min append sum enumerate len sum sorted min len to_csv read_csv ClaimSpotterAPI drop join listdir show remove exp subplots plot reshape set_ylim set_xlim random zip score_samples fit DataLoader cs_batch_size_reg load_custom_model argmax list load_testing_data cs_model_dir update format classification_report close compute_ndcg preds_on_batch stats_on_batch info f1_score batch get_length print warm_up tqdm ClaimSpotterModel numpy save_custom_model load_custom_model argmax tolist cs_restore_and_continue append range cs_model_dir update format close preds_on_batch stats_on_batch info f1_score batch time warm_up tqdm ClaimSpotterModel update format warm_up tolist close preds_on_batch tqdm cs_batch_size_reg load_custom_model info ClaimSpotterModel batch y get_n_splits StratifiedKFold strip print_flags exit tolist load_training_data train_test_split eval_model concatenate compute_class_weights_fold load_crossval_data train_model enumerate isdir cs_tb_dir rmtree class_weights array x split append info flag_values_dict info sorted read format set_title _compute_covariance plot gaussian_kde set_xlabel set_ylabel density linspace expand_sentence join append split list range reversed len join correct_mistakes expand_contractions remove_possessives print 
text_to_word_sequence sub remove_kill_words list transform_sentence_complete process_sentence_full_tags get_sentiment tqdm append range len TextBlob str str pos_tag word_tokenize append get_tags index list text nlp ents len print load cs_ner_spacy print download Embedding Sequential add compile set_weights Bidirectional Sequential add Dense LSTM Dropout model gradient set_weights Embedding compile todense set_index text to_csv map transform DataFrame word_tokenize list iterrows text map pos_tag zip DataFrame values dump sort_index print filter fit subplots StratifiedKFold concat plot_roc_curve balanced_accuracy_score linspace show roc_auc tolist legend fpr append tpr predict format plot classification_report compute_ndcg set compute_average_precision mean interp auc minimum sort_index print text fit maximum TfidfVectorizer filter split zeros fill_between std len pos_tag word_tokenize concat DataFrame open map normalize fit_transform format CountVectorizer __name__ load join T toarray set_index sort_index text index filter read_csv | # Claim Spotting In this repository we are providing code to train claim-spotting models via: - [SVM](https://github.com/idirlab/claimspotter/tree/master/svm) <a href="https://colab.research.google.com/github/idirlab/claimspotter/blob/master/svm/svm-notebook.ipynb" target="_parent\"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> - [Bi-Directional LSTM](https://github.com/idirlab/claimspotter/tree/master/bidirectional-lstm) <a href="https://colab.research.google.com/github/idirlab/claimspotter/blob/master/bidirectional_lstm/bilstm-notebook.ipynb" target="_parent\"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> - [Adversarial Training on a Transformer Model](https://github.com/idirlab/claimspotter/tree/master/bert-adversarial) <a 
href="https://colab.research.google.com/github/idirlab/claimspotter/blob/master/adv_transformer/adv_transformer-notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## License The work in this repository is released under the [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html). | 2,374 |
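The third variant listed above trains the transformer with adversarial perturbations of the input embeddings. As a generic illustration of that idea only (a fast-gradient-method (FGM) style step commonly used for adversarial training on embeddings; the function name and plain-list vectors are illustrative, not this repository's API), the perturbation is the loss gradient rescaled to a fixed L2 norm:

```python
import math

def fgm_perturbation(grad, eps):
    """FGM-style adversarial step: scale the loss gradient w.r.t. an
    embedding vector to L2 norm `eps`, i.e. the small push in the
    direction that increases the loss fastest. Added to the clean
    embedding, it yields the adversarial input used during training."""
    norm = math.sqrt(sum(g * g for g in grad)) + 1e-12
    return [eps * g / norm for g in grad]

# Example: a gradient of (3, 4) has norm 5, so the step is (0.3, 0.4).
delta = fgm_perturbation([3.0, 4.0], eps=0.5)
```

Training then fits the model on both the clean and the perturbed embeddings; see the linked Colab notebook for the repository's actual training code.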
ieee8023/countception | ['object localization'] | ['Count-ception: Counting by Fully Convolutional Redundant Counting'] | count-ception.py test_perf SimpleFactory genGausImage getMarkersCells getDensity getLabelsCells compute_counts save_network getTrainingExampleCells processImages load_network getCellCountCells ConvFactory batch_norm Conv2DLayer ConvFactory ConcatLayer pdf dstack zeros range print maximum zeros imread sum range sum range print getDensity stride zeros getCellCountCells range getMarkersCells pad getLabelsCells stride arange subplot2grid Figure str list set_title canvas imshow bar savefig set_canvas range gcf format concatenate mkdir set_size_inches print set_xticks set_ylim str dump get_all_param_values close open load str set_all_param_values open list abs mean flatten append classify sum range len list append classify sum range len | ## Use this code instead: [https://github.com/roggirg/count-ception_mbm](https://github.com/roggirg/count-ception_mbm) ## Count-Ception: Counting by Fully Convolutional Redundant Counting ([arXiv](http://arxiv.org/abs/1703.08710)) ### Joseph Paul Cohen, Genevieve Boucher, Craig A. Glastonbury, Henry Z. Lo, Yoshua Bengio Counting objects in digital images is a process that should be replaced by machines. This tedious task is time consuming and prone to errors due to fatigue of human annotators. The goal is to have a system that takes as input an image and returns a count of the objects inside and justification for the prediction in the form of object localization. We repose a problem, originally posed by Lempitsky and Zisserman, to instead predict a count map which contains redundant counts based on the receptive field of a smaller regression network. The regression network predicts a count of the objects that exist inside this frame. 
By processing the image in a fully convolutional way, each pixel is going to be accounted for some number of times: the number of windows which include it, which equals the size of each window (i.e., 32x32 = 1024). To recover the true count we take the average over the redundant predictions. Our contribution is redundant counting instead of predicting a density map in order to average over errors. We also propose a novel deep neural network architecture adapted from the Inception family of networks called the Count-ception network. Together our approach results in a 20% relative improvement (2.9 to 2.3 MAE) over the state-of-the-art method by Xie, Noble, and Zisserman in 2016. Citation request: Count-ception: Counting by Fully Convolutional Redundant Counting<br> JP Cohen, G Boucher, CA Glastonbury, HZ Lo, Y Bengio<br> International Conference on Computer Vision (ICCV) Workshop on Bioimage Computing ``` @inproceedings{Cohen2017,
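The recovery step described in this README (with stride-1 windows, every object is counted once per window that covers it, i.e. exactly window-area times, so a single division undoes the redundancy) can be checked in a few lines of plain Python. This is only an illustration of the counting arithmetic, not the repository's own model code:

```python
def count_map(dots, r):
    """Ground-truth target for redundant counting: slide an r x r window
    (stride 1) over a zero-padded dot-annotation map; each entry is the
    number of dots inside that window, which is what the small
    regression network is trained to predict at that position."""
    h, w = len(dots), len(dots[0])
    pad = r - 1  # pad so every dot is covered by exactly r * r windows
    padded = [[0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for y in range(h):
        for x in range(w):
            padded[y + pad][x + pad] = dots[y][x]
    return [[sum(padded[y + dy][x + dx] for dy in range(r) for dx in range(r))
             for x in range(w + pad)]
            for y in range(h + pad)]

def recover_count(cmap, r):
    """Each object was counted r * r times, once per covering window,
    so averaging the redundancy out is a single division."""
    return sum(map(sum, cmap)) / (r * r)
```

For a 3x4 map with three annotated dots and r = 3, `recover_count(count_map(dots, 3), 3)` returns exactly 3.0; in the paper the receptive field is 32x32, hence the division by 1024.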
ieee8023/xray-generalization | ['domain generalization'] | ['On the limits of cross-domain generalization in automated X-ray prediction'] | train_utils.py models/densenet.py train-joe.py datasets/xray.py | train valid_test_epoch train_epoch tqdm NIH_XrayDataset Openi_XrayDataset XrayDataset Kaggle_XrayDataset XRayCenterCrop ToPILImage PC_XrayDataset CheX_XrayDataset relabel_dataset FilterDataset NIH_Google_XrayDataset Merge_XrayDataset MIMIC_XrayDataset XRayResizer normalize DenseNet get_densenet_params _DenseLayer _DenseBlock _Transition list _decr_instances hasattr _instances model DataLoader output_dir save dataset BCEWithLogitsLoss cuda max seed name Adam pprint load_state_dict append to range state_dict manual_seed_all format glob mean manual_seed num_epochs int random_split join print makedirs train_epoch parameters array len model zero_grad set_description numpy features max featurereg default_pathologies step nansum append to sum range taskweights mean float enumerate criterion backward print label_concat_reg reshape labels weightreg tqdm train weight len asarray print tqdm mean eval range astype float32 list format T print index difference nan fill empty pathologies append dict | # xray-generalization You probably want this updated code instead: https://github.com/mlmed/torchxrayvision All the code will be converted to use that library. ## Citation This is the code for the following paper: ``` Cohen, J. P., Hashir, M., Brooks, R., & Bertrand, H. On the limits of cross-domain generalization in automated X-ray prediction. Medical Imaging with Deep Learning 2020 (Online: [https://arxiv.org/abs/2002.02497](https://arxiv.org/abs/2002.02497)) @inproceedings{cohen2020limits,
ifnspaml/SGDepth | ['depth estimation', 'monocular depth estimation', 'semantic segmentation'] | ['Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance'] | colors/plasma.py losses/segmentation.py eval_depth.py dataloader/pt_data_loader/dataset_parameterset.py losses/baselosses.py models/networks/pose_decoder.py dataloader/pt_data_loader/basedataset.py models/networks/partial_decoder.py loaders/segmentation/validation.py dataloader/pt_data_loader/specialdatasets.py colors/cityscapes.py train.py dataloader/eval/metrics.py colors/__init__.py loaders/__init__.py loaders/depth/validation.py inference.py loaders/segmentation/__init__.py models/networks/__init__.py loaders/segmentation/train.py dataloader/file_io/dir_lister.py loaders/pose/validation.py dataloader/file_io/get_path.py losses/__init__.py models/layers/__init__.py dataloader/data_preprocessing/kitti_2015_generate_depth.py dc_masking.py loaders/depth/train.py models/layers/grad_scaling_layers.py loaders/pose/__init__.py models/sgdepth.py dataloader/pt_data_loader/mytransforms.py loaders/depth/__init__.py models/networks/resnet_encoder.py arguments.py loaders/fns.py colors/tango.py eval_segmentation.py state_manager.py models/networks/multi_res_output.py dataloader/definitions/labels_file.py dataloader/data_preprocessing/kitti_utils.py perspective_resample.py losses/depth.py harness.py eval_pose.py dataloader/data_preprocessing/download_kitti.py timer.py TrainingArguments InferenceEvaluationArguments ArgumentsBase SegmentationEvaluationArguments DepthEvaluationArguments PoseEvaluationArguments DCMasking DepthEvaluator PoseEvaluator SegmentationEvaluator Harness Inference PerspectiveResampler Timer Trainer seg_idx_image domain_prob_image depth_norm_image _depth_to_percentile_normalized_disp seg_prob_image surface_normal_image adjust_projectedvelodyne_folders download_kitti_all generate_depth_from_velo adjust_improvedgt_folders sub2ind read_calib_file read_depth 
load_velodyne_points pcl_to_depth_map ClassDefinitions SegmentationRunningScore AverageMeter Evaluator DepthRunningScore dump_xyz compute_ate PoseRunningScore DirLister GetPath BaseDataset DatasetParameterset ExchangeStereo CenterCrop ToTensor LoadRGB Relabel OneHotEncoding RandomCrop RemapKeys NormalizeZeroMean ConvertSegmentation RemoveRightStereo LoadSegmentation MultiResize SidesCrop CreateColoraug RandomRotate Resize CreateScaledImage RandomHorizontalFlip RandomExchangeStereo GaussianBlurr RandomVerticalFlip LoadFlow RandomRescale LoadDepth RandomTranslate ConvertFlow RemoveOriginals ConvertDepth LoadNumerics AdjustKeys AddKeyValue ColorJitter StandardDataset get _validation_clamp_cityscapes _validation_mask_kitti_kitti _validation_clamp_kitti _validation_mask_kitti_zhou _validation_mask_cityscapes LoaderList FixedLengthLoaderList ChainedLoaderList kitti_odom09_train kitti_benchmark_train kitti_kitti_train kitti_zhou_train kitti_2015_train kitti_zhou_validation kitti_kitti_validation kitti_zhou_test kitti_odom09_validation kitti_odom10_validation cityscapes_train cityscapes_validation kitti_2015_train SmoothnessLoss KnowledgeDistillationCELossUmbertoOld KnowledgeDistillationCELossUmbertoNew BackgroundLoss KnowledgeDistillationCELoss CrossEntropyLoss2d KnowledgeDistillationCELossWithGradientScaling BackgroundCELoss BinaryCrossEntropyLoss SSIM KnowledgeDistillationCELossUmbertoOldWithBackground FocalLoss CrossEntropyLoss OhemCELoss DepthLosses SegLosses RemappingScore SGDepthSeg SGDepthCommon SGDepthPose SGDepth SGDepthDepth TestScaledSplit ScaleGrad ScaledSplit GRL MultiRes MultiResDepth MultiResSegmentation UpSkipBlock PartialDecoder PreConvBlock PoseDecoder ResnetEncoder transpose transpose gather unsqueeze expand sort view clamp expand _depth_to_percentile_normalized_disp unsqueeze gather pad permute join remove GetPath move extractall close extend rmtree get_data_path rename download listdir ZipFile values makedirs join GetPath include_dirs_by_name move 
glob print makedirs rmtree get_data_path get_directories split join GetPath include_dirs_by_name move glob print makedirs rmtree get_data_path get_directories split join GetPath include_dirs_by_name uint16 imwrite glob pcl_to_depth_map print makedirs astype get_data_path get_directories split join T sub2ind int read_calib_file reshape hstack min dot shape vstack round eye zeros load_velodyne_points set reshape float astype append dot eye sqrt sum int zeros_like int zeros_like clamp clamp print DataLoader StandardDataset ConcatDataset print DataLoader StandardDataset ConcatDataset print DataLoader StandardDataset ConcatDataset print DataLoader StandardDataset ConcatDataset print DataLoader StandardDataset print DataLoader StandardDataset print DataLoader StandardDataset print DataLoader StandardDataset print DataLoader StandardDataset print DataLoader StandardDataset getlabels StandardDataset print DataLoader gettrainid2label len getlabels StandardDataset print DataLoader gettrainid2label len getlabels gettrainid2label len | # Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance [Marvin Klingner](https://www.tu-braunschweig.de/en/ifn/institute/team/sv/klingner), [Jan-Aike Termöhlen](https://www.tu-braunschweig.de/en/ifn/institute/team/sv/termoehlen), Jonas Mikolajczyk, and [Tim Fingscheidt](https://www.tu-braunschweig.de/en/ifn/institute/team/sv/fingscheidt) – ECCV 2020 [Link to paper](https://arxiv.org/abs/2007.06936) ## ECCV Presentation <p align="center"> <a href="https://drive.google.com/file/d/1ZasoGhjkVP7crZGn1fMenT4yjv8rEYAW/view"> <img src="imgs/ECCV_presentation.jpg" alt="SGDepth video presentation ECCV" width="500"> </a> </p> ## Idea Behind the Method | 2,377 |
ifnspaml/perceptual-weighting-filter-loss | ['speech enhancement'] | ['A Perceptual Weighting Filter Loss for DNN Training in Speech Enhancement'] | GitHub_mask_dnn_weight_filter_train.py GitHub_all_test_mask_dnn_weight_filter.py GitHub_mask_dnn_baseline_train.py GitHub_all_test_mask_dnn_baseline.py reshapeDataMatrix reshapeDataMatrix reshapeDataMatrix reshapeDataMatrix zeros min max range | # Perceptual Weighting Filter Loss Please find here the scripts referring to the paper [A Perceptual Weighting Filter Loss for DNN Training in Speech Enhancement](https://arxiv.org/pdf/1905.09754.pdf). In this repository we provide the source code for training/validation data preparation (including the amplitude response for the perceptual weighting filter), network training/validation (including the proposed perceptual weighting filter loss), network inference, and enhanced speech waveform reconstruction. The code was written by [Ziyue Zhao](https://ziyuezhao.github.io/) and some of the contributions are from Ziyi Xu. ## LATEST Some Python code is updated to match the TensorFlow 2 (the original code was written for TensorFlow 1). See Prerequisites for detailed information about how to start. ## Introduction In this project, instead of applying the commonly used mean squared error (MSE) as the loss function during DNN training for single-channel speech enhancement, we designed a perceptual weighting filter loss. The proposed loss is motivated by the perceptual weighting filter in analysis-by-synthesis speech coding, e.g., in code-excited linear prediction (CELP). The proposed approach outperforms the reference DNN trained with MSE loss in terms of better PESQ and higher noise attenuation. ## Prerequisites and Installation - Nvidia GPU with CUDA and CuDNN (the code is tested with CUDA version 11.4) - Install [Anaconda](https://www.anaconda.com/) | 2,378 |
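Stripped of the filter design, the loss described in the Introduction above reduces to a squared spectral error that is reweighted per frequency bin before averaging. The sketch below is an illustration under stated assumptions, not the repository's TensorFlow code: in the paper the weights come from the weighting filter's amplitude response (derived per frame from LPC coefficients, as in CELP), whereas here `w` is just a stand-in vector:

```python
def weighted_mse(s_hat, s, w):
    """Per-frequency-bin weighted squared error: bins where the
    perceptual weight `w` is large contribute more to the loss, so the
    network concentrates on errors that are perceptually important.
    With w = [1, 1, ...] this reduces to the ordinary MSE baseline."""
    assert len(s_hat) == len(s) == len(w)
    errs = [(a - b) ** 2 for a, b in zip(s_hat, s)]
    return sum(wi * ei for wi, ei in zip(w, errs)) / len(errs)
```

Setting `w` to all ones recovers the plain-MSE training that the reference DNN in the paper is trained with.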
igloooo/weather-forecasting-video-prediction | ['video prediction'] | ['PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs'] | train.py ST_LSTM.py network.py data_generator.py prepare_data.py config.py learning_pytorch.py small test.py add_argument_group get_config str2bool BouncingMNISTDataHandler Encoder get_batches load_data SpatioTemporal_LSTM create_conv HighwayUnit activation_statistics showPlot evaluate initial_hiddens grad_statistics show_generation asMinutes trainIters main timeSince train make_dot append parse_known_args load shape format print range len set_major_locator subplots plot close MultipleLocator savefig figure floor time Digraph set dict grad_fn add_nodes append zeros range len list zip size reduce item append round range len parameters format decoder zeros_like initial_hiddens backward debug grad_statistics clip_grad_norm_ zero_grad encoder parameters info max_pool2d step range time showPlot print train Adam parameters GetBatch save append to range state_dict decoder zeros_like initial_hiddens encoder GetBatch max_pool2d to range subplot initial_hiddens print encoder imshow savefig figure max_pool2d to range detach load L1Loss BouncingMNISTDataHandler Encoder MSELoss trainIters load_state_dict device to | # PredRNN Contains PyTorch implementation of paper- PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs [link](https://papers.nips.cc/paper/6689-predrnn-recurrent-neural-networks-for-predictive-learning-using-spatiotemporal-lstms) # Datasets The Moving MNIST dataset can be downloaded [here](http://www.cs.toronto.edu/~nitish/unsupervised_video/) # Architecture  | 2,379 |
ignacio-rocco/cnngeometric_pytorch | ['geometric matching'] | ['Convolutional neural network architecture for geometric matching'] | util/train_test_fn.py data/synth_dataset.py train.py data/caltech_dataset.py image/normalization.py data/download_datasets.py geotnf/point_tnf.py model/loss.py geotnf/flow.py util/eval_util.py util/py_util.py model/cnn_geometric_model.py options/options.py util/torch_util.py geotnf/transformation.py demo.py geotnf/grid_gen.py eval.py data/tss_dataset.py data/pf_dataset.py util/dataloader.py main main CaltechDataset download_caltech download_TSS download_and_uncompress download_pascal download_PF_pascal download_PF_willow PFPascalDataset PFDataset SynthDataset TSSDataset th_sampling_grid_to_np_flow write_flo_file read_flo_file warp_image np_flow_to_th_sampling_grid TpsGridGen HomographyGridGen AffineGridGen AffineGridGenV2 unnormalize_axis compose_H_matrices PointTnf PointsToUnitCoords compose_aff_matrices compose_tps normalize_axis PointsToPixelCoords SynthPairTnf ComposedGeometricTnf SynthTwoStageTnf GeometricTnf AffineGridGen TpsGridGen SynthTwoStageTwoPairTnf SynthTwoPairTnf AffineGridGenV2 homography_mat_from_4_pts HomographyGridGen normalize_image NormalizeImageDict CNNGeometric FeatureExtraction FeatureRegression featureL2Norm FeatureCorrelation TransformedGridLoss ArgumentParser DataLoaderIter default_collate DataLoader _worker_loop ExceptionWrapper pin_memory_batch _pin_memory_loop poly_str_to_mask pck_metric obj_ptr flow_metrics pck localization_error eval_model_multistage intersection_over_union poly_to_mask area_metrics compute_metric label_transfer_accuracy create_file_path str_to_bool save_checkpoint BatchTensorToVars expand_dim train validate_model download_caltech parse download_TSS create_model batch_size print Dataset model_2 eval_dataset_path DataLoader eval BatchTensorToVars model_1 is_available download_PF_pascal download_PF_willow compute_metric SynthPairTnf trained_model_fn save_checkpoint validate_model tensor 
TransformedGridLoss seed CNNGeometric geometric_model Adam MSELoss range SummaryWriter SynthDataset close mkdir manual_seed CosineAnnealingLR float num_epochs trained_model_dir lr_scheduler join log_dir add_graph download_pascal min parameters feature_extraction_cnn use_mse_loss train endswith print extractall write close dirname open ZipFile exists makedirs print join basename download_and_uncompress print join basename download_and_uncompress print join basename download_and_uncompress join basename download_and_uncompress print rename print join basename download_and_uncompress print close float32 int32 resize fromfile open tofile astype float32 close array open uint8 grid_sample Variable astype unsqueeze np_flow_to_th_sampling_grid list concatenate Variable unsqueeze meshgrid cuda range normalize_axis unnormalize_axis list concatenate squeeze meshgrid numpy range clone normalize_axis expand_as unnormalize_axis clone expand_as view view Variable zeros cuda is_cuda cat is_available PointTnf tpsPointTnf view bmm view size contiguous squeeze expand unsqueeze inverse cuda is_cuda cat list isinstance Variable size add expand div unsqueeze cuda is_cuda expand_as seed get set_num_threads put collate_fn get pin_memory_batch isinstance put is_tensor list sum isinstance Sequence new zip _new_shared Mapping Mapping is_tensor isinstance Sequence compose_H_matrices model geoTnf GeometricTnf compose_aff_matrices homography_mat_from_4_pts compose_tps range num_of_iters flow_output_dir list format flatnonzero str print size model_2 eval eval_model_multistage batch_tnf zeros metric_fun keys enumerate len ne float size mean pow expand_as zeros le sum range data PointTnf list PointsToUnitCoords homPointTnf tnf_2 size pck pck_alpha numpy affPointTnf tnf_1 range tpsPointTnf PointsToPixelCoords homPointTnf tnf_2 localization_error unsqueeze linspace tnf_1 cuda poly_str_to_mask view affPointTnf meshgrid range cat th_sampling_grid_to_np_flow grid_sample size int PointTnf pointsToGrid 
Variable numpy tpsPointTnf homPointTnf tnf_2 unsqueeze linspace tnf_1 cuda view write_flo_file create_file_path affPointTnf meshgrid range cat th_sampling_grid_to_np_flow size int PointTnf join pointsToGrid Variable numpy flow_output_dir tpsPointTnf polygon zeros Variable fromstring unsqueeze poly_to_mask cuda mul float sum int list obj_ptr astype where mean meshgrid abs range list min where meshgrid max range dirname makedirs join basename copyfile dirname save makedirs size list format view model pair_generation_tnf backward print len zero_grad tqdm item loss_fn step enumerate add_scalar format view model pair_generation_tnf print add_scalar eval loss_fn enumerate | # CNNGeometric PyTorch implementation  This is the implementation of the paper: I. Rocco, R. Arandjelović and J. Sivic. Convolutional neural network architecture for geometric matching. [[website](http://www.di.ens.fr/willow/research/cnngeometric/)][[CVPR version](https://arxiv.org/abs/1703.05593)][[Extended TPAMI version](https://hal.archives-ouvertes.fr/hal-01859616/file/cnngeometric_pami.pdf)] ## Dependencies See `requirements.txt` ## Demo Please see the `demo.py` script or the `demo_notebook.ipynb` Jupyter Notebook. ## Training You can train the model using the `train.py` script in the following way: | 2,380 |
ignacio-rocco/ncnet | ['visual localization', 'semantic correspondence'] | ['Neighbourhood Consensus Networks'] | train.py lib/model.py lib/pf_dataset.py lib/normalization.py lib/point_tnf.py lib/py_util.py lib/conv4d.py lib/im_pair_dataset.py lib/torch_util.py eval_inloc.py lib/plot.py lib/transformation.py lib/dataloader.py lib/eval_util.py eval_pf_pascal.py process_epoch weak_loss Conv4d conv4d DataLoaderIter default_collate DataLoader _worker_loop ExceptionWrapper pin_memory_batch _pin_memory_loop pck_metric pck ImagePairDataset FeatureExtraction featureL2Norm MutualMatching NeighConsensus ImMatchNet maxpool4d FeatureCorrelation normalize_image NormalizeImageDict PFPascalDataset plot_image save_plot unnormalize_axis PointsToUnitCoords bilinearInterpPointTnf corr_to_matches nearestNeighPointTnf normalize_axis PointsToPixelCoords create_file_path str_to_bool save_checkpoint BatchTensorToVars collate_custom Softmax1D expand_dim AffineTnf AffineGridGen view model size mean permute normalize max format backward print step zero_grad capitalize loss_fn numpy enumerate len Variable size contiguous conv3d half get_device HalfTensor zeros range cuda is_cuda cat seed get set_num_threads put collate_fn get pin_memory_batch isinstance put is_tensor list sum isinstance Sequence new zip _new_shared Mapping Mapping is_tensor isinstance Sequence ne float size mean pow expand_as zeros le sum range data list PointsToUnitCoords size pck bilinearInterpPointTnf numpy range PointsToPixelCoords expand_as size max view tuple div fmod unsqueeze append max range cat isinstance Variable size add expand div unsqueeze cuda is_cuda show uint8 view Variable astype add imshow cuda is_cuda set_major_locator NullLocator set_axis_off margins subplots_adjust savefig list view size softmax linspace expand_as meshgrid max range view min pow sqrt unsqueeze cat int view isinstance Variable toidx size multrows abs sqrt unsqueeze long topoint sum cuda is_cuda clone normalize_axis expand_as 
unnormalize_axis clone expand_as dirname makedirs is_tensor Mapping isinstance exp unsqueeze join str basename copyfile dirname save makedirs size list | # Neighbourhood Consensus Networks  ## About This is the implementation of the paper "Neighbourhood Consensus Networks" by I. Rocco, M. Cimpoi, R. Arandjelović, A. Torii, T. Pajdla and J. Sivic. For more information check out the project [[website](http://www.di.ens.fr/willow/research/ncnet/)] and the paper on [[arXiv](https://arxiv.org/abs/1810.10510)]. ## Getting started ### Dependencies The code is implemented using Python 3 and PyTorch 0.3. All dependencies should be included in the standard Anaconda distribution. ### Getting the datasets The PF-Pascal dataset can be downloaded and unzipped by browsing to the `datasets/pf-pascal/` folder and running `download.sh`. | 2,381 |
ignacio-rocco/weakalign | ['semantic correspondence'] | ['End-to-end weakly-supervised semantic alignment'] | util/train_test_fn.py data/synth_dataset.py data/caltech_dataset.py image/normalization.py data/download_datasets.py geotnf/point_tnf.py data/weak_dataset.py model/loss.py geotnf/flow.py util/eval_util.py data/pascal_parts_dataset.py util/py_util.py model/cnn_geometric_model.py options/options.py util/torch_util.py geotnf/transformation.py demo.py train_strong.py eval.py data/tss_dataset.py train_weak.py data/pf_dataset.py util/dataloader.py affTpsTnf process_epoch inlier_score_function loss_fun process_epoch CaltechDataset download_caltech download_TSS download_and_uncompress download_pascal download_pascal_parts download_PF_pascal download_PF_willow PascalPartsDataset PFPascalDataset PFDataset SynthDataset TSSDataset ImagePairDataset th_sampling_grid_to_np_flow write_flo_file read_flo_file warp_image np_flow_to_th_sampling_grid unnormalize_axis PointTnf PointsToUnitCoords normalize_axis PointsToPixelCoords SynthPairTnf ComposedGeometricTnf SynthTwoStageTnf GeometricTnf AffineGridGen TpsGridGen SynthTwoStageTwoPairTnf SynthTwoPairTnf AffineGridGenV2 normalize_image NormalizeImageDict CNNGeometric FeatureExtraction FeatureRegression featureL2Norm FeatureCorrelation TwoStageCNNGeometric WeakInlierCount TransformedGridLoss TwoStageWeakInlierCount ArgumentParser DataLoaderIter default_collate DataLoader _worker_loop ExceptionWrapper pin_memory_batch _pin_memory_loop poly_str_to_mask pck_metric obj_ptr flow_metrics pck point_dist_metric inlier_count localization_error intersection_over_union mean_dist poly_to_mask area_metrics pascal_parts_metrics compute_metric label_transfer_accuracy create_file_path str_to_bool save_checkpoint BatchTensorToVars collate_custom Softmax1D expand_dim test_fun_strong train_fun_strong test_fun_weak print_train_progress train_fun_weak unsqueeze cat grid_sample GeometricTnf format backward model print step zero_grad capitalize 
loss_fn batch_preprocessing_fn enumerate len inliersComposed inliersAffine mean inlier_score_function model endswith print extractall write close dirname open ZipFile exists makedirs print join basename download_and_uncompress print join basename download_and_uncompress print join basename download_and_uncompress print join basename download_and_uncompress join basename download_and_uncompress print rename print join basename download_and_uncompress print close float32 int32 resize fromfile open tofile astype float32 close array open uint8 grid_sample Variable astype unsqueeze np_flow_to_th_sampling_grid list concatenate Variable unsqueeze meshgrid cuda range normalize_axis unnormalize_axis list concatenate squeeze meshgrid numpy range clone normalize_axis expand_as unnormalize_axis clone expand_as isinstance Variable size add expand div unsqueeze cuda is_cuda expand_as seed get set_num_threads put collate_fn get pin_memory_batch isinstance put is_tensor list sum isinstance Sequence new zip _new_shared Mapping Mapping is_tensor isinstance Sequence model max str list inlier_count range flatnonzero format size eval batch_tnf keys enumerate minimum int isinstance print category zeros metric_fun flow_output_dir len ne float size mean pow expand_as zeros le sum range ne size mean pow div expand_as zeros sum range data PointTnf list PointsToUnitCoords size numpy affPointTnf range mean_dist tpsPointTnf PointsToPixelCoords list inliersComposed size TwoStageWeakInlierCount numpy range data PointTnf list PointsToUnitCoords size pck pck_alpha numpy affPointTnf range tpsPointTnf PointsToPixelCoords int theta_to_sampling_grid grid_sample pck_metric Variable size transpose unsqueeze intersection_over_union numpy cuda range localization_error unsqueeze linspace cuda poly_str_to_mask view intersection_over_union affPointTnf meshgrid range cat label_transfer_accuracy th_sampling_grid_to_np_flow grid_sample size int PointTnf pointsToGrid Variable numpy tpsPointTnf unsqueeze linspace 
cuda view write_flo_file create_file_path affPointTnf meshgrid range cat th_sampling_grid_to_np_flow size int PointTnf join pointsToGrid Variable numpy flow_output_dir tpsPointTnf polygon zeros Variable fromstring unsqueeze poly_to_mask cuda mul float sum int list obj_ptr astype where mean meshgrid abs range list min where meshgrid max range dirname makedirs is_tensor Mapping isinstance exp unsqueeze join basename copyfile dirname save makedirs size list format model pair_generation_tnf backward print zero_grad loss_fn train step enumerate len format model pair_generation_tnf print eval loss_fn enumerate model zero_grad print_train_progress FeatureCorrelation iter next sum FeatureExtraction format batch_tnf TpsGridRegularityLoss enumerate FeatureRegression backward print loss_fn train step tgrl len format model print next eval batch_tnf iter loss_fn sum enumerate print format | # End-to-end weakly-supervised semantic alignment  ## About This is the implementation of the paper "End-to-end weakly-supervised semantic alignment" by I. Rocco, R. Arandjelović and J. Sivic. For more information check out the project [[website](http://www.di.ens.fr/willow/research/weakalign/)] and the paper on [[arXiv](https://arxiv.org/abs/1712.06861)]. ## Getting started ### Dependencies The code is implemented using Python 3 and PyTorch 0.2. All dependencies are included in the standard Anaconda distribution. ### Training The code includes scripts for pre-training the models with strong supervision (`train_strong.py`) as proposed in [our previous work](http://www.di.ens.fr/willow/research/cnngeometric/), as well as to fine-tune the model using weak supervision (`train_weak.py`) as proposed in this work. | 2,382 |
ihdia/instance-segmentation-v1 | ['optical character recognition', 'instance segmentation', 'semantic segmentation'] | ['Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts'] | main/doc/app.py main/doc/modeltest.py main/doc/pyimagesearch/searcher.py mrcnn/config.py main/doc/metrics.py mrcnn/model.py main/doc/pyimagesearch/colordescriptor.py mrcnn/parallel_model.py main/doc/regions_detector.py main/doc/train.py mrcnn/visualize.py mrcnn/utils.py load_model index upload_file add_header background_process_test InferenceConfig InferenceConfig runtest get_ax DefaultInferenceConfig RegionsDetector Config train Dataset ColorDescriptor Searcher Config fpn_classifier_graph MaskRCNN compose_image_meta rpn_bbox_loss_graph norm_boxes_graph compute_backbone_shapes rpn_class_loss_graph log DetectionTargetLayer trim_zeros_graph log2_graph parse_image_meta parse_image_meta_graph data_generator rpn_graph identity_block BatchNorm build_fpn_mask_graph load_image_gt build_rpn_targets resnet_graph unmold_image PyramidROIAlign apply_box_deltas_graph denorm_boxes_graph make_mask_weight generate_random_rois detection_targets_graph build_detection_targets overlaps_graph mrcnn_bbox_loss_graph focal_loss_fixed conv_block batch_pack_graph ProposalLayer smooth_l1_loss clip_boxes_graph mrcnn_class_loss_graph mrcnn_mask_loss_graph mold_image build_rpn_model DetectionLayer refine_detections_graph ParallelModel build_model compute_ap norm_boxes compute_recall apply_box_deltas compute_overlaps compute_iou compute_matches2 resize resize_image box_refinement_graph generate_pyramid_anchors mold_mask generate_anchors compute_ap2 compute_per_region_ap compute_overlaps_masks compute_ap_range unmold_mask denorm_boxes download_trained_weights compute_overlaps_masks2 non_max_suppression minimize_mask resize_mask extract_bboxes trim_zeros compute_matches batch_slice expand_mask box_refinement Dataset display_differences draw_box display_images draw_rois draw_boxes 
apply_mask random_colors display_instances display_table display_weight_stats plot_overlaps plot_precision_recall display_top_masks format image_ids class_names print prepare load_weights load_data get_default_graph InferenceConfig Dataset len join secure_filename save filename flash imread subplots str class_names COLOR_BGR2RGB print detect display_instances savefig resize_image get_ax range cvtColor len print prepare load_data dataset Dataset array ljust size print BACKBONE callable str str conv_block identity_block range stack minimum concat maximum set_shape split minimum reshape maximum tile expand_dims split concat reduce_max boolean_mask MASK_SHAPE crop_and_resize gather box_refinement_graph round trim_zeros_graph ROI_POSITIVE_RATIO transpose squeeze pad cast expand_dims range USE_MINI_MASK overlaps_graph cond int TRAIN_ROIS_PER_IMAGE float32 greater maximum int32 split minimum apply_box_deltas_graph reshape clip_boxes_graph concat gather map_fn DETECTION_MAX_INSTANCES stack gather_nd DETECTION_MIN_CONFIDENCE pad set_intersection expand_dims argmax BBOX_STD_DEV Input rpn_graph int_shape less abs cast switch constant not_equal squeeze where mean sparse_categorical_crossentropy gather_nd cast int32 equal IMAGES_PER_GPU batch_pack_graph switch constant smooth_l1_loss squeeze where mean gather_nd cast int32 sum equal reduce_sum sparse_softmax_cross_entropy_with_logits cast gather argmax switch constant reshape smooth_l1_loss mean int64 stack cast gather_nd gather max T arange zeros_like reshape distance_transform_edt astype meshgrid array enumerate ones_like clip zeros_like where equal switch constant reshape transpose mean shape int64 stack cast gather_nd gather binary_crossentropy uint8 minimize_mask compose_image_meta extract_bboxes load_mask zeros astype randint resize_image shape warning resize_mask MINI_MASK_SHAPE load_image bool fliplr augment_image to_deterministic int ROI_POSITIVE_RATIO concatenate resize astype TRAIN_ROIS_PER_IMAGE compute_iou choice 
MASK_SHAPE int32 box_refinement USE_MINI_MASK zeros argmax range sum zip ones compute_overlaps choice RPN_TRAIN_ANCHORS_PER_IMAGE zeros argmax amax len int sort min hstack randint zeros max range split image_ids arange IMAGE_SHAPE compute_backbone_shapes RPN_ANCHOR_RATIOS generate_pyramid_anchors BACKBONE_STRIDES MAX_GT_INSTANCES shape expand_dims load_image_gt build_rpn_targets astype shuffle copy choice generate_random_rois build_detection_targets RPN_ANCHOR_SCALES mold_image RPN_ANCHOR_STRIDE float32 extend zeros len list array boolean_mask reduce_sum cast bool abs append range constant concat float32 cast split constant concat float32 cast split reset_default_graph Input zeros array range minimum maximum zeros range compute_iou astype float32 jaccard_similarity_score append sum array range T astype float32 dot sum astype delete float32 compute_iou append astype float32 stack cast float32 log astype float32 log dtype min pad resize randint float round max pad astype resize zeros bool range astype resize zeros bool range zeros bool astype resize arange concatenate reshape flatten sqrt meshgrid array append generate_anchors range len ones trim_zeros compute_overlaps_masks range len ones trim_zeros compute_overlaps_masks2 range len arange concatenate cumsum astype float32 compute_matches2 maximum sum range len arange concatenate cumsum compute_matches astype float32 maximum sum range len sorted compute_ap2 reshape compute_matches compute_ap_range union1d mean intersect1d unique append sum array range len format arange compute_ap2 print mean append compute_overlaps set argmax max len list graph_fn zip append range len print array array show subplot uint8 axis astype imshow title figure zip len list shuffle range where axis show set_title apply_mask imshow append find_contours range set_xlim astype copy zeros uint8 Polygon print text add_patch Rectangle randint fliplr set_ylim compute_matches display_instances concatenate len subplots arange rand axis Line2D 
unmold_mask shape title apply_mask imshow format set_xlim astype copy enumerate add_line print text add_patch Rectangle int32 set_ylim len format arange display_images unique append sum range format subplots set_title plot set_xlim set_ylim list format arange product yticks text xlabel tight_layout ylabel imshow figure xticks max range len subplots axis Line2D random_colors set_title apply_mask imshow find_contours range set_xlim astype copy zeros add_line uint8 Polygon text add_patch Rectangle int32 randint fliplr set_ylim HTML display get_trainable_layers name weights display_table append get_weights enumerate | # Region-segmentation This system takes in an input document image and outputs an image with region labels overlaid on top of the image. It also generates a JSON which can then be loaded as a project in the annotator tool for further refinement. We also provide instructions for training the model. ### Install prerequisites ```bash python3 -m pip install -r requirements.txt ``` ### To run Inference on your own image 1. Download the pretrained model from this [link](https://drive.google.com/open?id=1EV0mFrRDCQ9ZHHgbVbjxSbWLhJ4XoBb7) 2. Place the `pretrained_model_indiscapes.h5` file in the root folder 3. Start the GUI application (`main/doc/app.py`)
ihollywhy/SPAGAN | ['graph attention'] | ['SPAGAN: Shortest Path Graph Attention Network'] | utils.py models_spagat.py train_spagat.py layers_spagat.py SpGraphAttentionLayer SpaGAT train test normalize_adj sparse_mx_to_torch_sparse_tensor gen_pathm sample_mask accuracy load_data_orggcn parse_index_file preprocess_adj graph_tool_apsp encode_onehot normalize single_gen_path load_pathm time format model backward print nll_loss zero_grad accuracy item step format model print nll_loss accuracy eval item get list map set array diags flatten dot sum array sum type_as double data Size astype float32 from_numpy shape int64 append int strip open zeros diags flatten coo_matrix sum array normalize_adj eye from_dict_of_lists tuple parse_index_file vstack preprocess_adj max list todense FloatTensor tolist normalize range format lil_matrix LongTensor adjacency_matrix tolil sort sparse_mx_to_torch_sparse_tensor min sample_mask print zeros array len int list LongTensor concatenate FloatTensor reshape Size min array append keys len list cuda keys data a time format print Graph reshape tolist new_edge_property shortest_distance set append add_edge_list add_vertex keys range data single_gen_path time format setdiag print _values min _indices shape graph_tool_apsp load_npz max range enumerate | ## SPAGAN in PyTorch This is a PyTorch implementation of the paper "SPAGAN: Shortest Path Graph Attention Network" ### Prerequisites We prefer to create a new conda environment to run the code. #### PyTorch Version >= 1.0.0 #### PyTorch-geometric We use [torch_geometric](https://github.com/rusty1s/pytorch_geometric), [torch_scatter](https://rusty1s.github.io/pytorch_scatter/build/html/index.html) and [torch_sparse](https://github.com/rusty1s/pytorch_sparse) as backbone to implement the path attention mechanism. Please follow the [official website](https://rusty1s.github.io/pytorch_geometric/build/html/notes/installation.html) to install them. 
#### networkx We use [networkx](https://networkx.github.io/) to load the graph dataset. | 2,384 |
ihungalexhsu/NIESR-code | ['speech recognition'] | ['NIESR: Nuisance Invariant End-to-end Speech Recognition'] | seq2seq.py dataloader.py dataset.py utils.py main.py uai_seq2seq.py model.py get_data_loader _collate_fn PickleDataset Decoder pBLSTM Encoder disentangle_nuisance addnoiselayer inverse_pBLSTM disentangle_clean E2E AttLoc Seq2seq UAI_seq2seq remove_pad_eos cc_model to_sents _seq_mask char_list_to_str trim_representation adjust_learning_rate cc Logger calculate_cer pad_list to_gpu ind2character append sort pad_sequence max range fill_ len device is_available device LongTensor new cc IntTensor pad_list size expand from_numpy array cuda expand_as long is_cuda load_state_dict state_dict append next char_list_to_str ind2character append append join eval zip item | # NIESR-code Code for NIESR: Nuisance Invariant End-to-end Speech Recognition # Requirements - python 3.7 - pytorch 1.1 - editdistance (pip install editdistance) - PyYAML (pip install pyyaml) - tensorboardX | 2,385 |
iikka-v/ML-NDT | ['data augmentation'] | ['Augmented Ultrasonic Data for Machine Learning'] | src/train.py src/inference.py DebugCallback data_generator concatenate loadtxt reshape astype shuffle empty | # ML-NDT Data and code for training a deep convolutional neural network to detect cracks in phased-array ultrasonic data. Please refer to https://arxiv.org/abs/1903.11399 for details. ## Contents The directory "data" contains ultrasonic data sets, containing various flaws. Each batch file is named with a UUID and contains * .bins file, which contains the raw data * .meta file, which documents the raw data format, this is always UInt16, 256 x 256 x 100 * .jsons file, which contains JSON-formatted metadata for each binary file. This includes the locations of all flaws, source flaw size and "equivalent size" * .labels file, which contains tab-separated data for flaw existence (0/1) and equivalent flaw size.
iitmnlp/Dialogue-Evaluation-with-BERT | ['dialogue evaluation'] | ['Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining'] | baselines/ADEM/vhred/src/train.py baselines/ADEM/vhred/src/tfidf_retrieval.py baselines/BERT+DNN/utils.py deb/optimization.py baselines/ADEM/vhred/src/model.py baselines/ADEM/experiments.py baselines/ADEM/vhred/src/adam.py baselines/ADEM/vhred/src/eval_model_hred_fair.py baselines/ADEM/pretrain.py baselines/ADEM/vhred/src/vhred_compute_dialogue_embeddings.py baselines/RUBER/unreference_score.py baselines/BERT+DNN/train_unreference.py baselines/BERT+DNN/unreference_score.py deb/create_tfrecord_data_from_json.py baselines/RUBER/utils.py baselines/ADEM/vhred/src/eval_model_hred.py deb/modeling.py deb/tokenization.py deb/extract_features.py baselines/ADEM/vhred/src/hred_encoder.py deb/deb.py baselines/ADEM/vhred/src/utils.py baselines/ADEM/vhred/src/vhred_state.py deb/optimization_test.py baselines/RUBER/train_unreference.py baselines/ADEM/vhred/src/vhred_retrieval.py baselines/ADEM/train.py baselines/ADEM/preprocess.py baselines/ADEM/apply_bpe.py deb/modeling_test.py baselines/ADEM/vhred/src/unfair_eval_model_hred.py baselines/ADEM/vhred/src/vhred_dialog_encdec.py deb/tokenization_test.py baselines/ADEM/vhred/src/numpy_compat.py baselines/ADEM/models.py create_parser BPE get_pairs encode configurations ADEM DataLoader Preprocessor VHRED parse_args create_experiment Adam sharedX get_score get_gtresponse clean_list compute_model_embeddings LinearEvalModel flatten make_plot idxs_to_strs calc_system_scores strs_to_idxs fixmodelmap2 show_overlap_scores extend_by_four shuffle_dataset preprocess_data set_shared_variable get_len_features preprocess_tweet compute_separate_pca make_line_plot get_auxiliary_features compute_pca get_context filter_modelmap test apply_test_filter compute_init_values short_shuffle combine_contextids apply_train_filter compute_liu_pca construct_filter get_modelresponse long_shuffle 
get_twitter_data load_data clean_str correlation train get_score get_gtresponse clean_list compute_model_embeddings LinearEvalModel flatten make_plot idxs_to_strs calc_system_scores strs_to_idxs fixmodelmap2 show_overlap_scores extend_by_four shuffle_dataset preprocess_data set_shared_variable get_len_features preprocess_tweet compute_separate_pca make_line_plot get_auxiliary_features compute_pca get_context filter_modelmap test apply_test_filter compute_init_values short_shuffle combine_contextids apply_train_filter compute_liu_pca construct_filter get_modelresponse long_shuffle get_twitter_data load_data clean_str correlation train UtteranceEncoder EncoderDecoderBase add_to_params DCGMEncoder DialogEncoder DialogLevelLatentEncoder DialogLevelRollLeft DialogEncoderDecoder DialogDummyEncoder Model preprocess_tweet strs_to_idxs mat_vector_2norm bpestrs_to_strs sanity_check brute_force_search flatten_list process_dialogues idxs_to_strs idxs_to_bpestrs mat_vector_2norm_squared tfidf_retrieval load Unbuffered save main init_timings parse_args get_score get_gtresponse compute_model_embeddings LinearEvalModel flatten make_plot idxs_to_strs calc_system_scores strs_to_idxs fixmodelmap2 show_overlap_scores extend_by_four preprocess_data set_shared_variable get_len_features preprocess_tweet compute_separate_pca make_line_plot get_auxiliary_features compute_pca get_context filter_modelmap test apply_test_filter compute_init_values combine_contextids apply_train_filter compute_liu_pca construct_filter get_modelresponse get_twitter_data load_data correlation train Maxout OrthogonalInit GrabProbs LayerNormalization sharedX Adam stable_log Adagrad RMSProp SoftMax NormalInit3D FeedforwardBatchNormalization RecurrentBatchNormalization NormalInit BatchedDot Adadelta ConvertTimedelta UniformInit DPrint NormalizationOperator compute_encodings parse_args main Timer UtteranceEncoder EncoderDecoderBase add_to_params TwoLayerMLP DCGMEncoder DialogEncoder DialogLevelRollLeft 
DialogEncoderDecoder DialogDummyEncoder LinearCombination UtteranceDecoder DialogLevelLatentGaussianEncoder preprocess_tweet strs_to_idxs compute_separate_pca mat_vector_2norm compute_model_embeddings compute_pca test_model sanity_check brute_force_search flatten_list process_dialogues idxs_to_strs transform_data_points scale_points prototype_twitter_HRED prototype_test prototype_ubuntu_LSTM prototype_twitter_lstm prototype_twitter_VHRED prototype_twitter_VHRED_StandardBias prototype_twitter_VHRED_Large_SkipConnections prototype_twitter_Gauss_VHRED_NormOp prototype_twitter_HRED_NormOp prototype_state prototype_ubuntu_VHRED prototype_twitter_HRED_StandardBias prototype_test_variational prototype_twitter_HRED_Large prototype_ubuntu_HRED validation count_parameters test main train BERT_RUBER_unrefer get_batch create_data load_best_model preprocess process_file validation evaluate count_parameters test main train RUBER_unrefer get_batch create_data load_best_model load_special_model tokenizer preprocess Vocab load_embedding process_file load_word2vec make_embedding_matrix TrainingInstance create_int_feature create_instances_from_document create_training_instances write_instance_to_example_files call_fns create_float_feature truncate_seq_pair create_masked_lm_predictions get_masked_lm_output gather_indexes get_next_sentence_output input_fn_builder _decode_record main model_fn_builder read_examples InputFeatures input_fn_builder InputExample _truncate_seq_pair convert_examples_to_features main model_fn_builder embedding_lookup reshape_from_matrix dropout assert_rank reshape_to_matrix layer_norm_and_dropout get_shape_list gelu create_initializer BertConfig attention_layer get_activation layer_norm embedding_postprocessor transformer_model create_attention_mask_from_input_mask get_assignment_map_from_checkpoint BertModel BertModelTest create_optimizer AdamWeightDecayOptimizer OptimizationTest validate_case_matches_checkpoint convert_by_vocab FullTokenizer BasicTokenizer 
convert_ids_to_tokens WordpieceTokenizer printable_text convert_tokens_to_ids load_vocab whitespace_tokenize convert_to_unicode _is_whitespace _is_control _is_punctuation TokenizationTest add_argument ArgumentParser add set get_pairs endswith tuple min extend index append expt_dir test_data vhrd_data dev_data train_data mode add_argument ArgumentParser makedirs floatX items list sharedX sqr get_value sqrt append append float replace segment strip append append replace append preprocess_tweet append preprocess_tweet append preprocess_tweet append int sort append sort append flatten append list range len append list range len list keys list keys append dot range len int time str print extend_by_four append range len transform zeros range zeros PCA range fit_transform PCA transform zeros fit_transform range int str build_decoder_encoding print build_encoder_function bs compute_encodings ceil float range append len append range len append zeros range append clean_str plot xlabel ylabel clf savefig unique xlabel clf savefig plot append range len mean append array range construct_filter len pearsonr spearmanr append function squared_error fmatrix LinearEvalModel lscalar get_output_train open str list l1_regularization len tensor3 append set_shared_variable sum range dump make_line_plot grad l2_regularization float get_params train_model int time get_output print makedirs float32 get_output_val correlation zeros fvector set_params function print make_plot correlation set_shared_variable get_output_new list shuffle append range len get_len_features list strs_to_idxs long_shuffle print compute_model_embeddings get_auxiliary_features DialogEncoderDecoder flatten short_shuffle prototype_state append zeros array range len get_test_len_loss train_model2 squared_len_error hinge_loss_len l1_regularization_F_I train_model3 append append append join append replace dot range len T dot sqrt append range append T dot range norm infty append range len T toarray print dot shape append 
mat_vector_2norm_squared type range len SIGINT time dump format savez SIG_IGN print state signal open SIGINT time format SIG_IGN print signal reinitialize_latent_variable_parameters train_batch state get_value add_latent_gaussian_per_utterance warn save_every_valid_iteration params save exists open eval_batch build_nce_function basicConfig list str eval_grads BeamSampler append DialogEncoderDecoder next sum range format debug astype build_eval_grads build_train_function start resume pformat build_eval_function get_train_iterator sample float load items time auto_restart collect print ConvertTimedelta reshape RandomSampler dict reinitialize_decoder_parameters isfile zeros add_random_variables_to_batch rng init_timings zeros make_plot normal list sharedX name sqr get_value float32 OrderedDict sqrt keys list sharedX name sqr get_value OrderedDict sqrt keys list sharedX name sqr get_value OrderedDict sqrt keys minimum int normal permutation svd zeros range flatten reshape minimum int normal permutation zeros range int NormalInit zeros range exp max dimshuffle minimum sum cast dimshuffle ones_like sum dimshuffle dimshuffle dimshuffle batched_dot list hasattr reverse_utterances print eos_sym model_compute_encoder_state bs zeros range len build_decoder_encoding prototype_state len compute_encodings ceil build_encoder_function readlines bs words_to_indices int enumerate split time dump open T list mat_vector_2norm dot zip append argmax array range len prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state prototype_state print numel named_parameters PrettyTable add_row criterion backward clip_grad_norm_ zero_grad step from_numpy parameters is_available BCELoss cuda net enumerate criterion from_numpy eval is_available float BCELoss cuda net enumerate batch_size set_description floor round cuda Adam BERT_RUBER_unrefer inf 
get_batch close test is_available validation tqdm parameters exp_dir train BERT_RUBER_unrefer get_batch batch_size load_best_model astype confusion_matrix exp_dir eval savetxt array int32 is_available BCELoss cuda mode int join print load_state_dict float listdir split rstrip arange readlines shuffle append float array len join strip write readlines close preprocess loads enumerate open from_pretrained list sorted concatenate print zip len save transpose format flush accuracy_score transpose tolist savetxt format astype print pointbiserialr extend confusion_matrix exp_dir mode int32 array len validation seed load_best_model RUBER_unrefer flush get_vocab_size evaluate hidden_size load seed get_vocab_size get_batch batch_size print count_parameters load_best_model test exp_dir RUBER_unrefer is_available cuda hidden_size open load_state_dict seed close get_vocab_size dump exists print append range open print squeeze Vocab segment_ids tokens list convert_tokens_to_ids masked_lm_labels SerializeToString OrderedDict Example append masked_lm_positions value create_int_feature TFRecordWriter close info keys enumerate join write create_float_feature len Feature Feature list get_bucket print extend Client create_instances_from_document float keys range len TrainingInstance extend append create_masked_lm_predictions len list print pop len print create_training_instances write_instance_to_example_files gather_indexes reshape get_shape_list range gather parse_single_example to_int32 list keys do_eval TPUClusterResolver TPUEstimator set_verbosity output_dir do_train from_json_file model_fn_builder savez_compressed eval_batch_size tpu_name bert_config_file shape predict asarray input_fn_builder Glob MakeDirs info INFO join extend PER_HOST_V2 train_batch_size RunConfig input_type_ids input_mask append unique_id input_ids join text_b InputFeatures convert_tokens_to_ids _truncate_seq_pair tokenize info append unique_id text_a enumerate len pop len FullTokenizer read_examples 
convert_examples_to_features input_file pow tanh sqrt pi lower name group OrderedDict match list_variables layer_norm dropout one_hot reshape get_shape_list matmul gather expand_dims get_variable one_hot reshape get_shape_list layer_norm_and_dropout matmul assert_less_equal get_variable ones reshape get_shape_list float32 cast dense dropout multiply get_shape_list reshape transpose float32 matmul transpose_for_scores expand_dims sqrt cast softmax float reshape_to_matrix int get_shape_list append reshape_from_matrix range reshape_to_matrix as_list assert_rank name shape append enumerate reshape ndims get_shape_list name integer_types ndims isinstance trainable_variables list constant get_or_create_global_step gradients clip_by_global_norm group float32 apply_gradients cast int32 zip polynomial_decay CrossShardOptimizer AdamWeightDecayOptimizer match group isinstance PY3 PY2 isinstance PY3 PY2 OrderedDict append strip split category category startswith category ord | # Dialogue-Evaluation-with-BERT This repository contains the code for the proposed DEB model and the baselines it was compared with, along with the proposed DailyDialog++ dataset. (The link to the corresponding paper, which has been accepted at TACL will be updated here soon.) For any questions, please contact [email protected] and [email protected] | 2,387 |
imanneelmaachi/Parkinson-disease-detection-and-severity-prediction-from-gait | ['gait recognition'] | ['Deep 1D-Convnet for accurate Parkinson disease detection and severity prediction from gait'] | src/train.py src/data_utils.py src/algo.py src/results.py multiple_cnn1D multiple_cnn1D5_level conv1D_full Data Results Results_level train ablation_study train_severity train_classifier print RMSprop Model summary Input compile conv1D_full concatenate print tolist rmsprop Model summary append array range compile Nadam conv1D_full concatenate print tolist Model summary append array range compile Nadam X_train arange EarlyStopping y_val CSVLogger split load_weights X_val y_train ModelCheckpoint compile fit separate_fold arange validate_patient y_val delete input_data exp_name str Data strftime multiple_cnn1D Results range X_val join to_json print count_val output train makedirs separate_fold validate_patient y_val input_data exp_name str Data strftime multiple_cnn1D Results range X_val join to_json print count_val output train makedirs separate_fold arange validate_patient y_val input_data exp_name str Data multiple_cnn1D5_level strftime Results_level range X_val join to_json print count_val output train makedirs | Deep 1D-Convnet for accurate Parkinson disease detection and severity prediction from gait - This is the official code release for the paper : El Maachi, I., Bilodeau, G.-A., Bouachir, W., Deep 1D-Convnet for accurate Parkinson disease detection and severity prediction from gait, Expert Systems With Applications, Volume 143, 2020 You can access it via arXiv: https://arxiv.org/abs/1910.11509 For any questions or queries, please contact Imanne El Maachi: [email protected] Prerequisites - - Python 3.7 - CPU or NVIDIA GPU + CUDA CuDNN (the algorithm was developed with Cuda and CudNN) | 2,388 |
imannema/cv_style_transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | code/transfer.py deprocess_image Evaluator gram_matrix eval_loss_and_grads content_loss total_variation_loss preprocess_image style_loss expand_dims preprocess_input img_to_array load_img reshape transpose astype dot transpose batch_flatten permute_dimensions gram_matrix square reshape astype f_outputs | # Through the Covid-19 Lens Project Team Members: - Iman Nematollahi, [email protected] - Soon Gi Shin, [email protected] - Justin Lee, [email protected] - Jaskaranpal Singh, [email protected] - Dan Ngo, [email protected] ## Abstract Since the rise of the COVID-19 pandemic, there has been a significant change in the way that many people view the world. Though there are those who believe that the pandemic is nothing but a hoax, a greater majority has become extremely fearful of the potential threat that the coronavirus poses. Being in quarantine has exacerbated many of these fears as well, as many are finding it to be more and more difficult to leave the house without feeling as if they are risking their lives. As such, for our project, we wanted to capture a bit of this feeling of uneasiness to illustrate to
imatge-upc/egocentric-2017-lta | ['image retrieval'] | ['Semantic Summarization of Egocentric Photo Stream Events'] | src/Saliency/deep/get_params.py src/Saliency/deep/saliency.py get_params main data join dump getopt Classifier print insert glob exit set_mode_gpu set_mode_cpu shape savemat open get_params predict makedirs | # Semantic Summarization of Egocentric Photo Stream Events | ![Xavier Giro-i-Nieto][XavierGiro-photo] | ![Aniol Lidon][AniolLidon-photo] | ![Marc Bolaños][MarcBolanos-photo] | ![Maite Garolera][MaiteGarolera-photo] | ![Mariella Dimiccoli][MariellaDimiccoli-photo] | ![Petia Radeva][PetiaRadeva-photo] | |:-:|:-:|:-:|:-:|:-:|:-:| | [Xavier Giro-i-Nieto][XavierGiro-web] | Aniol Lidon | [Marc Bolaños][MarcBolanos-web] | [Maite Garolera][MaiteGarolera-web] | [Mariella Dimiccoli][MariellaDimiccoli-web] | [Petia Radeva][PetiaRadeva-web] | [XavierGiro-photo]: ./authors/XavierGiro.jpg "Xavier Giro-i-Nieto" [AniolLidon-photo]: ./authors/AnioLidon.jpg "Aniol Lidon" [MarcBolanos-photo]: ./authors/MarcBolanos.jpg "Marc Bolaños" [MarcCarne-photo]: ./authors/MarcCarne.jpg "Marc Carné" [MaiteGarolera-photo]: ./authors/MaiteGarolera-160x160.jpg "Maite Garolera" [MariellaDimiccoli-photo]: ./authors/MariellaDimiccoli.jpg "Mariella Dimmicoli" | 2,390 |
imatge-upc/medical-2017-nipsw | ['medical image segmentation', 'active learning', 'semantic segmentation'] | ['Cost-Effective Active Learning for Melanoma Segmentation'] | segnet_train_predict.py segnet.py cnn_net.py utils.py CEAL.py train_loop ModelCNN build_model train predict uncertain_set certain_set predictions_max_class entropy_rank pseudo_label_error str format evaluate print len write range fit Sequential add Dense MaxPooling2D summary Conv2D Activation Flatten Dropout VGG16 layers output Model compile build_model print load_train_data astype mean summary ModelCheckpoint std fit load_test_data build_model print astype mean load_weights save std range zeros sum log len min max zeros range len range len | # CEAL Medical Image Segmentation Cost Effective Active Learning algorithm to train a Convolutional Neural Network for medical image segmentation | 2,391 |
imatge-upc/rsis | ['instance segmentation', 'semantic segmentation'] | ['Recurrent Neural Networks for Semantic Instance Segmentation'] | src/modules/vision.py src/coco/PythonAPI/pycocotools/__init__.py src/train.py src/dataloader/pascal.py src/utils/utils.py src/coco/PythonAPI/pycocotools/mask.py src/test.py src/coco/PythonAPI/pycocotools/cocoeval.py src/utils/objectives.py src/dataloader/dataset.py src/dataloader/pascal_precompute.py src/dataloader/transforms/utils.py src/utils/hungarian.py src/coco/PythonAPI/pycocotools/coco.py src/dataloader/cityscapes.py src/args.py src/dataloader/pascalplus_gen.py src/modules/model.py src/modules/clstm.py src/eval_cityscapes.py src/dataloader/dataset_utils.py src/coco/PythonAPI/setup.py src/eval.py src/dataloader/leaves.py src/plot_curves.py src/dataloader/transforms/transforms.py src/eval_leaves.py get_parser create_annotation Evaluate resize_mask display_masks create_coco_object Evaluate extract_losses read_lines plot_curves_parser test init_dataloaders runIter trainIters COCO Params COCOeval encode decode area toBbox CityScapes MyDataset flip_crop sequence_palette resize_ get_dataset pascal_palette scale convert_from_color_segmentation LeavesDataset PascalVOC make_dir read_file write_file create_annotation precompute make_coco make_dir get_imnames RandomChoiceTranslate random_crop Rotate RandomTranslate RandomRotate RandomChoiceZoom AffineCompose RandomZoom Zoom Affine Translate RandomChoiceRotate RandomChoiceShear RandomAffine Shear RandomShear save_transform load_transform th_affine3d th_corrcoef th_pearsonr th_flatten th_c_flatten th_allclose th_random_choice th_iterproduct_like th_ones_like th_zeros_like th_gather_nd th_iterproduct th_uniform th_matrixcorr th_bilinear_interp2d th_nearest_interp2d th_trilinear_interp3d th_bc_flatten th_nearest_interp3d th_constant_like th_affine2d ConvLSTMCell RSIS FeatureExtractor ResNet50 ResNet101 ResNet34 VGG16 StableBalancedMaskedBCE match MaskedNLL softIoU MaskedNLLLoss 
softIoULoss MaskedBCELoss get_base_params get_skip_params merge_params outs_perms_to_cpu load_checkpoint make_dir check_parallel save_checkpoint init_visdom batch_to_var get_skip_dims get_optimizer set_defaults add_argument ArgumentParser decode Line2D max subplot ones squeeze set_autoscale_on imshow gca frPyObjects append range center_of_mass enumerate add_line text min dstack array reshape astype mask_th zoom dict join eval_split list pascal_dir dict COCO append enumerate split rstrip subplots arange plot set_title print extract_losses len set_xlabel set_ylabel savefig legend append float read_lines split decoder view size eval UpsamplingBilinear2d upsample_match maxseqlen append encoder range cat len get_classes batch_size Compose ToTensor get_dataset DataLoader Normalize imsize data mul iou_weight softIoU zero_grad unsqueeze mask_siou update_encoder cuda class_crit view ones squeeze from_numpy permute maxseqlen append encoder range use_gpu byte size use_class_loss mean upsample_match limit_seqlen_to float gt_maxseqlen stop_xentropy curriculum_learning use_stop_loss decoder backward Variable contiguous min sigmoid UpsamplingBilinear2d repeat match train step len state weight_decay_cnn runIter epoch_resume weight_decay FeatureExtractor transfer check_parallel DataParallel image save_checkpoint cuda get_optimizer visdom open heatmap list defaultdict get_skip_params num_classes transfer_from max_epoch load_state_dict MaskedNLLLoss append range use_gpu dump Visdom outs_perms_to_cpu synchronize make_dir mean RSIS flipud resume lr limit_seqlen_to init_visdom optim init_dataloaders batch_to_var Linear get_base_params join time curriculum_learning line enumerate best_val_loss print load_checkpoint reshape lr_cnn softIoULoss parameters model_name optim_cnn numpy MaskedBCELoss shape MyChosenDataset expand_dims size resize_ squeeze random_crop size copy from_numpy expand_dims fromiter reshape pascal_palette dtype float zoom mkdir rstrip astype join rstrip concatenate 
reshape zeros imread convert_from_color_segmentation join pascal_dir list create_annotation unique append zeros range len th_flatten LongTensor contiguous stride index_select mv bmm th_bilinear_interp2d th_nearest_interp2d size transpose contiguous unsqueeze repeat expand_as float th_iterproduct view LongTensor size stride add gather round long mul LongTensor view clamp size stride add floor gather long size contiguous th_trilinear_interp3d th_nearest_interp3d expand_as float mm th_flatten float round long th_flatten mul clamp size floor long mean norm sub dot clamp size mean t pow div sub expand_as mm diag norm mean div sub expand_as mm cat long arange isinstance mul cuda log clamp sum log sigmoid sum compute Munkres size tolist zeros numpy range OrderedDict items list bn1 layer1 requires_grad layer3 layer4 parameters modules append features layer2 conv1 range len append parameters range len range len Adam RMSprop SGD join dump model_name save open state_dict load join open Variable use_gpu float long line ones image maxseqlen heatmap range argmax size numpy view | # Recurrent Neural Networks for Semantic Instance Segmentation See the paper in arXiv [here](https://arxiv.org/pdf/1712.00617.pdf). ## Installation - Clone the repo: ```shell git clone https://github.com/imatge-upc/rsis.git ``` - Install requirements ```pip install -r requirements.txt``` - Install [PyTorch 0.2](http://pytorch.org/) (choose the whl file according to your setup): ```shell | 2,392 |
imatge-upc/sentiment-2015-asm | ['sentiment analysis'] | ['Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction'] | compute_cross_validation_accuracy.py | # Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction | ![Víctor Campos][VictorCampos-photo] | ![Amaia Salvador][AmaiaSalvador-photo] | ![Xavier Giro-i-Nieto][XavierGiro-photo] | ![Brendan Jou][BrendanJou-photo] | |:-:|:-:|:-:|:-:| | Víctor Campos | [Amaia Salvador](https://imatge.upc.edu/web/people/amaia-salvador) | [Xavier Giro-i-Nieto](https://imatge.upc.edu/web/people/xavier-giro) | [Brendan Jou](http://www.ee.columbia.edu/~bjou/) | [VictorCampos-photo]: https://raw.githubusercontent.com/imatge-upc/sentiment-2015-asm/master/figures/authors/VictorCampos.jpg "Víctor Campos" [AmaiaSalvador-photo]: https://raw.githubusercontent.com/imatge-upc/sentiment-2015-asm/master/figures/authors/AmaiaSalvador.jpg "Amaia Salvador" [XavierGiro-photo]: https://raw.githubusercontent.com/imatge-upc/sentiment-2015-asm/master/figures/authors/XavierGiro.jpg "Xavier Giro-i-Nieto" [BrendanJou-photo]: https://raw.githubusercontent.com/imatge-upc/sentiment-2015-asm/master/figures/authors/BrendanJou.png "Brendan Jou" A joint collaboration between: | ![logo-upc] | ![logo-etsetb] | ![logo-gpi] | ![logo-columbia] | ![logo-dvmmlab] | | 2,393 |
imatge-upc/sentiment-2017-imavis | ['sentiment analysis'] | ['From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction'] | compute_cross_validation_accuracy.py compare_and_plot_training_logs.py sentiment_maps/compose_sentiment_maps.py sentiment_maps/generate_sentiment_maps.py | # From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction ## Image and Vision Computing | ![Víctor Campos][VictorCampos-photo] | ![Brendan Jou][BrendanJou-photo] | ![Xavier Giro-i-Nieto][XavierGiro-photo] | |:-:|:-:|:-:| | [Víctor Campos](https://www.linkedin.com/in/victor-campos-camunez) | [Brendan Jou](http://www.ee.columbia.edu/~bjou/) | [Xavier Giro-i-Nieto](https://imatge.upc.edu/web/people/xavier-giro) | [VictorCampos-photo]: ./figures/authors/VictorCampos.jpg "Víctor Campos" [BrendanJou-photo]: ./figures/authors/BrendanJou.png "Brendan Jou" [XavierGiro-photo]: ./figures/authors/XavierGiro.jpg "Xavier Giro-i-Nieto" A joint collaboration between: | ![logo-bsc] | ![logo-upc] | ![logo-gpi] | ![logo-columbia] | ![logo-dvmmlab] | | 2,394 |
imran3180/depth-map-prediction | ['depth estimation', 'monocular depth estimation', 'superpixels'] | ['Depth Map Prediction from a Single Image using a Multi-Scale Deep Network', 'Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture'] | image_helper.py data.py test_run.py evaluate.py main.py model.py NYUDataset TransposeDepthInput plot_grid plot_grid custom_loss_function abs_relative_difference coarse_validation train_fine rmse_linear train_coarse rmse_log scale_invariant fine_validation squared_relative_difference threeshold_percentage fineNet coarseNet transpose imshow ImageGrid numpy range pow sum pow sum exp ones where zeros sum max pow exp sqrt sum pow sum sqrt abs sum exp abs pow exp sum custom_loss_function backward coarse_model step zero_grad train type enumerate custom_loss_function fine_model backward coarse_model step zero_grad eval train type enumerate format print coarse_model eval type scalar_summary enumerate format fine_model print coarse_model eval type scalar_summary enumerate | ## Depth Map Prediction from a Single Image using a Multi-Scale Deep Network 1. [depth-map-prediction](https://github.com/imran3180/depth-map-prediction) 2. [unet-depth-prediction](https://github.com/DikshaMeghwal/unet-depth-prediction) ---- This repository is the first part of the project and Pytorch implementation of Depth Map Prediction from a Single Image using a Multi-Scale Deep Network by David Eigen, Christian Puhrsch and Rob Fergus. [Paper Link](https://cs.nyu.edu/~deigen/depth/depth_nips14.pdf) <p align="center"> <img src="https://s2.gifyu.com/images/output_Ky1KUn.gif" alt="https://gifyu.com/image/wZwF" alt="monodepth"> </p> Architecture ---------- | 2,395 |
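Several metric names listed in the row above (`rmse_linear`, `rmse_log`, `scale_invariant`, `abs_relative_difference`) are the standard depth-evaluation errors from the Eigen et al. paper this repo implements. A minimal NumPy sketch of the scale-invariant log error — the function name and the default `lam` weighting are assumptions for illustration, not taken from the repository:

```python
import numpy as np

def scale_invariant_log_error(pred, target, lam=1.0):
    """Scale-invariant MSE in log space (Eigen et al., NIPS 2014).

    pred, target: positive depth maps of identical shape.
    lam=1.0 gives the fully scale-invariant error; lam=0 reduces it
    to a plain mean squared log error.
    """
    d = np.log(pred) - np.log(target)
    n = d.size
    return (d ** 2).sum() / n - lam * (d.sum() ** 2) / (n ** 2)
```

With `lam=1.0`, multiplying `pred` by any constant factor leaves the error unchanged, which is the property the paper's training loss (used there with a 0.5 weighting) exploits.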
inbaroren/improving-compgen-in-semparse | ['semantic parsing'] | ['Improving Compositional Generalization in Semantic Parsing'] | text2sql/training/metrics/sql_kb_acc.py text2sql/data/dataset_readers/dataset_utils/text2sql_utils.py drop_code/domain_languages/drop_language.py text2sql/modules/attention/bilinear_attention.py text2sql/data/dataset_readers/grammar_based_text2sql_v3.py text2sql/data/dataset_readers/text2sql_seq2seq_reader.py text2sql/modules/attention/coverage_attention_v2.py text2sql/state_machines/trainers/__init__.py text2sql/predictors/text2sql_grammar_predictor.py text2sql/models/seq2seq_attn_sup.py text2sql/semparse/contexts/draft.py text2sql/training/metrics/sql_global_templ_acc.py text2sql/data/preprocess/sql_templates.py text2sql/state_machines/trainers/maximum_marginal_likelihood.py text2sql/models/seq2seq_coverage.py text2sql/semparse/worlds/text2sql_world_v3.py text2sql/data/dataset_readers/dataset_utils/span_utils.py text2sql/models/seq2seq.py text2sql/training/metrics/__init__.py text2sql/data/dataset_readers/grammar_based_attn_sup.py text2sql/semparse/worlds/grmr_attn_sup_world.py text2sql/data/dataset_readers/seq2seq_spans.py text2sql/training/metrics/sql_validity_metric.py text2sql/modules/attention/coverage_attention.py text2sql/data/tokenizers/whitespace_tokenizer.py text2sql/state_machines/states/grammar_based_state.py text2sql/state_machines/states/sql_statelet.py text2sql/training/metrics/coverage_loss.py text2sql/state_machines/transition_function/coverage_transition_function.py text2sql/data/preprocess/text2sql_canonicalizer.py text2sql/data/preprocess/complete_vars_dict.py text2sql/models/text2sql_copynet.py text2sql/data/dataset_readers/no_grammar_based_text2sql.py text2sql/data/dataset_readers/text2sql_copynet_reader.py text2sql/data/dataset_readers/grammar_based_spans.py scripts/misc/cky_spans.py text2sql/models/grmr_attn_sup.py text2sql/training/metrics/token_sequence_accuracy.py 
text2sql/state_machines/transition_function/basic_transition_function.py text2sql/semparse/contexts/text2sql_table_context_v3.py text2sql/semparse/worlds/text2sql_nogrammar_world.py scripts/misc/alignment_utils.py text2sql/semparse/contexts/text2sql_table_context_v2.py text2sql/semparse/contexts/text2sql_nogrammar_table_context.py text2sql/data/preprocess/remove_join.py text2sql/data/dataset_readers/seq2seq_attn_sup.py text2sql/training/metrics/bleu.py text2sql/state_machines/states/__init__.py text2sql/state_machines/trainers/maximum_marginal_likelihood_attn_sup.py text2sql/models/text2sql_parser.py text2sql/models/grmr_over_spans.py text2sql/data/tokenizers/findollak_sql_tokeniser.py SetPassage2SetPassage Number Passage2SetNumber Number2Bool SetProp Passage2SetPassage Bool QuestionNumber DROPLanguage PassageSpan QuestionSpan main project Passage2Number YearDiff is_global_rule shorten_sql_tokens preprocess_alignment preprocess_alignment_to_print load_cond_prob load_cond_probs fast_align_text2sql inspect_alignment update_alignments_in_file shorten_sql_string inspect_alignment_file ignore_alignment fast_align_text2sql_only_update preprocess_alignment_file train_alignment fast_align_text2filteredsql load_alignment clean create_align_input main GrammarBasedAttnSupText2SqlDatasetReader GrammarBasedSpansText2SqlDatasetReader GrammarBasedText2SqlDatasetReader NoGrammarBasedText2SqlDatasetReader AttnSupSeq2SeqDatasetReader Seq2SeqSpansDatasetReader CopyNetText2SqlDatasetReader Seq2SeqDatasetReader EcpSpanExtractor retokenize_gold TableColumn column_has_numeric_type read_schema_dict disambiguate_col_names split_table_and_column_names SqlScope read_dataset_schema fix_specific_examples clean_and_split_sql_v2 process_sql_data_attn_sup clean_unneeded_aliases clean_and_split_sql replace_variables_sql resolve_primary_keys_in_schema process_sql_data_attn_sup_grmr clean_first_aliases process_sql_data_standard replace_variables process_sql_data split_data column_has_string_type 
SqlData resolve_primary_keys_in_schema_aliased all_names update_vars_dicts complete_vars_dict remove_join_string test_remove_join remove_join_from_file test_dealised prep_dealiased_sql dealiased_sql_schema_sanitize test sql_schema_sanitize dept_num_spacing standardize_word_forms am_and_pm n_credit process_sentence update_quotes untokenise FindollakSqlTokenizer tokenise StandardTokenizer Text2SqlTokenizer WhitespaceTokenizer AttnSupText2SqlParser SpansText2SqlParser DropSeq2Seq AttnSupSeq2Seq AttentionCoverageSeq2Seq CopyNetSeq2Seq Text2SqlParser BilinearAttention CoverageAdditiveAttention CoverageAdditiveAttention TextToSqlGrammarPredictor update_grammar_with_table_values update_grammar_with_global_values update_grammar_with_tables update_grammar_to_be_variable_free update_grammar_numbers_and_strings_with_variables update_grammar_with_untyped_entities update_grammar_values_with_variables test_all_sql_tokens_in_grammar update_grammar_with_table_values update_grammar_with_global_values update_grammar_with_tokens update_grammar_with_tables update_grammar_values_with_variables update_grammar_numbers_and_strings_with_variables update_grammar_with_table_values update_grammar_with_global_values update_grammar_with_tables update_grammar_to_be_variable_free update_grammar_numbers_and_strings_with_variables update_grammar_with_untyped_entities update_grammar_values_with_variables test_all_sql_tokens_in_grammar update_grammar_with_table_values update_grammar_with_global_values update_grammar_with_tables update_grammar_to_be_variable_free update_grammar_numbers_and_strings_with_variables update_grammar_with_untyped_entities update_grammar_values_with_variables update_grammar_with_derived_tabs_and_cols test_all_sql_tokens_in_grammar AttnSupGrammarBasedWorld Text2SqlNoGrammarWorld Text2SqlWorld GrammarBasedState SqlStatelet MaximumMarginalLikelihood MaximumMarginalLikelihoodAttnSup BasicTransitionFunction CoverageTransitionFunction BLEU CoverageAttentionLossMetric 
calculate_coverage_loss get_glob_templ GlobalTemplAccuracy KnowledgeBaseConstsAccuracy get_consts get_unaliased_consts SqlValidity test_tokens_to_sql TokenSequenceAccuracy join sorted list all_possible_productions print logical_form_to_action_sequence exit DROPLanguage get_nonterminal_productions keys lisp_to_nested_expression strip split sub replace append extend join list format print StandardTokenizer len filter append WhitespaceTokenizer str print wait returncode Popen append open endswith dict split float open load_cond_prob append open split get int str append enumerate split preprocess_alignment_to_print split append enumerate open get int split append enumerate get int split append enumerate join preprocess_alignment split append enumerate open update_alignments_in_file train_alignment mkdir create_align_input update_alignments_in_file inspect_alignment_file train_alignment mkdir create_align_input extract items preprocess_text replace append enumerate append split extend split_table_and_column_names replace split get replace extend fix_specific_examples sub findall split join list extend set sub findall split append append enumerate append enumerate get pop append zip get pop append zip TableColumn upper append enumerate open defaultdict upper append enumerate open items list preprocess_text join replace set add get items list preprocess_text replace set add append deepcopy join makedirs get join add clean_and_split_sql_v2 clean_unneeded_aliases disambiguate_col_names SqlData replace_variables resolve_primary_keys_in_schema split join add clean_and_split_sql_v2 clean_unneeded_aliases disambiguate_col_names SqlData replace_variables resolve_primary_keys_in_schema split join sub findall replace group append finditer enumerate len print complete_vars_dict append range len append deepcopy remove_join_string append search finditer group remove_join_string findall print set clean_unneeded_aliases clean_and_split_sql get list replace findall keys values split get 
list replace strip sub startswith findall keys finditer values split print len range sql_schema_sanitize prep_dealiased_sql print dealiased_sql_schema_sanitize range len dept_num_spacing n_credit standardize_word_forms am_and_pm sub append word_tokenize join group search update_quotes append split append update_quotes split update sorted list set values items sorted list extend column_has_numeric_type execute column_has_string_type get remove items list get items list isalpha upper column_has_numeric_type float column_has_string_type extend sorted extend sorted extend sorted extend append append list sorted set items minimum ones_like cumsum stack expand_dims sum array join add set join list items group add set sub finditer join tokens_to_sql_query metric SqlValidity zip get_metric split | # Improving Compositional Generalization In Semantic Parsing Official Github repo for the paper ["Improving Compositional Generalization In Semantic Parsing"](https://arxiv.org/abs/0000.0000). This repo is basically an allennlp package, so allennlp installation is required. Notice that Text2SQL models require version 0.9.0. For each model, an example for a configuration file is available at /training_config. To train a model, update the path, dataset name, and split name (for iid split use 'new_question_split', for program split use 'schema_full_split') in the configuration file. For example: `allennlp train ./improving-compgen-in-semparse/training_config/iid_ati_seq2seq_glove_config.jsonnet -s YOUR_OUTPUT_LOCATION --include-package text2sql` The parameters for each of the experiments in the paper are listed in /training_config/best_params.xlsx . | 2,396 |
infolab-usc/bdr-tweet | ['sentiment analysis'] | ['On Identifying Disaster-Related Tweets: Matching-based or Learning-based?'] | graph_plot.py svm_postprocess.py tweet_tokenizer.py preprocess_fema_data.py remove_spam_tweets.py word2vec_model.py count_classify_hash.py hash_tag.py doc2vec_train.py hashtag_tweet_filter.py napa_tweets_per_state.py svm_preprocess.py Utils.py neg_ratio.py remove_duplicate_tweets.py draw_partitions.py Tree.py merge.py Params.py Node.py word2vec_tweet_filter.py word2vec_fast.py plot.py filer_tweets_by_topics.py process_crisislex.py UtilsBDR.py neg_ratio_plot.py word2vec_sentifier.py geo_filter.py sentiment_analysis.py count_ratio.py get_tweets_by_id.py convent_hash_tag.py social_urgency_map.py tweet_count.py categorization.py spatial_stats.py disaster_type_classify.py main.py Quad_standard.py visualize_tweets.py Kd_pure.py affected_unaffected_filter.py filter_new.py extract_tweets.py similar_hash_classify.py sentiment_analyzer.py tweet_merger_geo_filter.py prepare_data.py create_vocab_from_tweets.py tweet_parser_won.py tweet_extracter.py word2vec.py temporal_stats.py sentiwordnet.py latent_semantic_indexing.py hash_filter split_datasets sensiment_analyzer html2unicode SentiStrength categorize_tweets hash_filter read_data save_vocab_tweets make_vocab LabeledLineSentence getLeafNode getPathData extract_tweets clean_line parse_tweet_json_gmove parse_tweet_json clean_and_tokenize num_token downgrade_emoji clean_tags website_tokenize num_alpha_token word_num_token break_tag is_emoji get_tweet_list get_tweet_id get_tweets_single parse_one_file get_tweets_bulk usage main dump_graph gen_markers_colors hash_filter Kd_pure latent_semantic_indexer plot_ratio Node Params split_data_cl_label split_data_cl split_data_ryan clean_line Quad_standard remove_spam_tweets sensiment_analyzer SentiStrength obj_score all_senti_synsets senti_synsets senti_synset __repr__ pos_score __str__ SentiSynset neg_score SentiWordNetCorpusReader read_social_map data_readin 
read_dyfi_map filter_tweets cdi_parse_csv read_shakemap_xml postprocess read_Test_Tweets read_train_data get_vocab read_test_data save_data Tree main get_tweets filter parse_tweets nltk_tokenize tokenize html2unicode performed_tasks_naive distance _step_function distance_to_rect is_intersect is_performed is_intersect_segment is_rect_cover _ccw zipf_cdf utility_naive is_rect_cover_rect rect_area rect_intersect rect_center utility zipf_pmf distance_point performed_tasks _wrg is_range_overlap performed_task rect_vertex_set is_cover_or_intersect acc_rate mbr_to_cellids mbr_to_path angle_bwn_two_points distance_km cell_coord website_tokenize break_tag latent_semantic_analysis is_emoji map_low_frequency_tokens clean_tags is_word score_across_dim utf8_to_ascii create_low_2_high_map create_token_mappings make_dictionary_and_corpus word_num_token num_token remove_clones remove_doc_label k_fold_roc downgrade_emoji num_alpha_token clean_and_tokenize website_tokenize break_tag latent_semantic_analysis is_emoji map_low_frequency_tokens clean_tags is_word utf8_to_ascii create_low_2_high_map create_token_mappings make_dictionary_and_corpus word_num_token num_token remove_clones k_fold_roc remove_doc_label downgrade_emoji num_alpha_token clean_and_tokenize website_tokenize break_tag latent_semantic_analysis is_emoji map_low_frequency_tokens clean_tags is_word score_across_dim utf8_to_ascii create_low_2_high_map create_token_mappings make_dictionary_and_corpus word_num_token num_token remove_clones remove_doc_label k_fold_roc downgrade_emoji num_alpha_token clean_and_tokenize k_fold_roc remove_doc_label map_low_frequency_tokens test_latent_semantic_analysis make_dictionary_and_corpus latent_semantic_analysis compile replace communicate encode Popen split print pr int list chr replace set filter findall join list print keys lower append max values join reader append tokenize open writer close writerows open list sorted write add set open getLeafNode time checkCorrectness 
Quad_standard print root buildIndex append se ne children nw popleft root deque sw append print loadtxt label_folder split join replace lower sub split join list strip write map close keys loads tokenize open join list print strip write map close keys loads tokenize open append isupper append break_tag append split append split append append append is_emoji TweetTokenizer HTMLParser apply PorterStemmer tokenize join replace write map sub encode statuses_lookup str list get_tweet_list chdir print getcwd glob close open range makedirs print exit basicConfig getopt getLogger glob debug parse_one_file usage DEBUG setLevel OAuthHandler get_tweets_single API get_tweets_bulk set_access_token list set dict zip append show remove text set_xlabel rotanimate set enumerate set_ylabel scatter figure gen_markers_colors set_zlabel gca annotate legend range append len print show mktime plot xlabel timetuple ylabel range write open close open items list print close write open Set int print Counter dict append range len pos _synset_from_pos_and_offset offset synset append name synsets senti_synset items list _synset_from_pos_and_offset append parse int read attrib endswith print text fromstring getchildren urlopen split int print loadtxt min max transpose append extend reader open writer read_Test_Tweets replace print len close writerows append float range open replace close open append range len str list join reader print sort keys append tokenize open str list join reader print sort extend keys append tokenize open close write open Twython append str search replace sort str tokenize TweetTokenizer html2unicode str list replace html2unicode TweetTokenizer map sub remove_handles findall radians cos atan2 sqrt sin abs _distance STEPS int MAR append abs range len ZIPF_STEPS int searchsorted _step_function max random sorted cumsum transpose min delete searchsorted distance uniform _acc_rate is_performed len list sorted performed_task shuffle range len distance is_performed _acc_rate 
n_box distance_to_rect _acc_rate sqrt MAR logical_or any __is_intersect min max zeros all logical_and zeros all is_intersect cumsum random bisect_right int y_min x_min GRID_SIZE GRID_SIZE atan2 radians cos atan2 sqrt sin abs decode replace split print write similarity title max len append remove_clones print create_low_2_high_map stem is_word append PorterStemmer append split Dictionary defaultdict TfidfModel append remove_doc_label concat append LsiModel DataFrame score write LogisticRegression average latent_semantic_analysis logspace unique append cross_val_score fit StratifiedKFold roc_curve LogisticRegression predict_proba linspace as_matrix append enumerate auc append concat DataFrame remove_doc_label | # Introduction # This repository contains geotagged tweets for 15 natural disasters across the U.S.A, each corresponds to a disaster occurred in 2014-2015. Disasters folder contains geo tagged tweets for 15 disasters accross USA. 1. California Fir 2. Iowa Storm, Tornadoes and Flood 3. Iowa Storm, Tornadoes and Flood 2 4. Iowa Storm 5. Jersey Storm 6. Michigan Storm 7. Napa Earthquake | 2,397 |
inkplatform/beta-vae | ['style transfer'] | ['Deep Feature Consistent Variational Autoencoder'] | models/celeba_model.py analyze.py train.py test/data_test.py models/casia_model.py test/path_test.py test/data_test_1.py models/feret_model.py preprocess.py config.py utils.py get_z get_attr_ims linear_interpolate plot_loss latent_arithmetic generate get_average_z get_attributes ImageDiskLoader split_dataset get_ims ImageMemoryLoader get_attr train test restore show_images plot restore_latest write_log save read_log DFCVAE BetaVAE DFCVAE BetaVAE DFCVAE BetaVAE to eval to eval eval linspace get_z zeros eval unsqueeze latent_size eval linspace list plot xlabel ylabel title savefig figure legend zip randint get_attr ImageDiskLoader len append append im_transform crop open time format model backward print len zero_grad dataset item to step loss enumerate ctime print eval load Parameter data str int list isinstance sorted print join size set copy_ add keys is_available prod state_dict glob int sorted restore sorted remove glob print makedirs dirname state_dict open dump dirname makedirs exists show subplot str axis imshow title savefig figure enumerate len show xlabel ylabel title figure | # Face Generation Using Variational Autoencoders This repo contains training code for two different VAEs implemented with Pytorch. <br /> I used the [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) Dataset for training, with 182637 training images and 19962 testing images. <br /> Trained model can be found in [/checkpoints](/checkpoints).  ## Model structures: #### [β-VAE [1]](https://openreview.net/pdf?id=Sy2fzU9gl):  #### [DFC-VAE [2]](https://arxiv.org/abs/1610.00291):  | 2,398 |
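The β-VAE objective named in the row above weights the KL term of the standard VAE ELBO by a factor β > 1 to encourage disentangled latents. A small NumPy sketch for a diagonal-Gaussian encoder — function names and the sum reduction are illustrative assumptions, not this repo's PyTorch code:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def beta_vae_loss(recon_err, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction error plus beta-weighted KL."""
    return recon_err + beta * kl_diag_gaussian(mu, logvar)
```

Setting `beta=1.0` recovers the ordinary VAE objective; larger values trade reconstruction fidelity for disentanglement.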
inon-peled/qtip_code_pub | ['traffic prediction'] | ['QTIP: Quick simulation-based adaptation of Traffic model per Incident Parameters'] | cache.py compare_different_model_types_for_m_abnormal.py regressor.py run_normal_scenarios_via_api_improved.py compare_normal_abnormal.py GaussianProcessRegressor.py regression_models.py multiprocessing_stack_trace.py various_plots_for_m_normal_and_others.py run_incident_scenarios_with_1_or_2_vehicles_and_5_steps_per_sim_second.py get_vectors.py common_definitions.py DeepNNRegressor.py save_results_for_all_executed_scenarios.py plotter.py copy_results_from_m_drive.py __generate_file_path __make_key cache __find_cached cache_method __read_cache_index_under_lock __add_to_cache get_combinations create_dir get_vectors_for_fitting convert_to_blocked_lanes remove_uplink_and_downlink to_latex_table get_df compare_normal_abnormal_just_rmse_known_and_unknown_lanes compare_normal_abnormal rename_atts_after_manual_copying copy_and_rename_atts remove_incident_scenarios_without_results DeepNNRegressor TrainedModel TrainedModel GaussianProcessRegressor get_vectors_from_multiple_att_input_files __get_vectors __get_speeds_from_att_which_is_already_1min_aggregated __get_df_before multiprocessed shorten_column_names plot_45_degrees_line Plotter DistressSignal1Or2Vehicles get_fresh_randomizer various_regression compute_stats Regressor run_all_combinations __run_given_combinations_of_incident_parameters run_multiple_bathces_because_vissim_sometimes_crashes __run_missing_combinations_if_previous_run_failed_midway_through log __get_incident_parameters_from_directory_name Link74IncidentScenarioCreatorFor1Or2Vehicles Link74IncidentScenarioRunnerFor1Or2Vehicles multiprocessed_run_num_simulations_for_each_scenario_params NormalScenarioCreator run_all_combinations consume_tasks log get_scenario_dirs_with_missing_results transfer_all_executed_scenarios_to_one_directory open_and_close_vissim_to_save_results degradation_m_ordinary_hexbin 
__plot_various_time_series makedirs join __generate_file_path exists makedirs list product glob join join DataFrame read_csv to_csv join print extend assign compare_normal_abnormal drop glob join copyfile makedirs glob join rename dirname print rmtree join glob rename reset_index __get_speeds_from_att_which_is_already_1min_aggregated drop linspace plot get_output_dir makedirs compute_stats print log send run_single_simulation Link74IncidentScenarioRunnerFor1Or2Vehicles range list shuffle log get_combinations __run_given_combinations_of_incident_parameters join list get_combinations frozenset __run_given_combinations_of_incident_parameters glob map Counter rmtree dirname elements dict str search print join run_all_combinations range get join str Dispatch run_simulation_batch copytree randint log create_tasks_queue list glob join copytree makedirs print LoadNet m_ordinary_degradation_on_test_vectors join compare_scatter_of_normal_model_with_and_without_other_links glob time_series_degradation_of_normal_model_with_and_without_other_links Regressor degradation_m_ordinary_hexbin | Code for paper "QTIP: Quick simulation-based adaptation of Traffic model per Incident Parameters". * Code entry point for running simulations: `run_incident_scenarios_with_1_or_2_vehicles_and_5_steps_per_sim_second.py` * Code entry point for regression models: `regression_models.py` * Code entry point for measuring deterioration of predictive quality under incidents: `compare_normal_abnormal.py` | 2,399 |