repo | tasks | titles | dependencies | readme | __index_level_0__
---|---|---|---|---|---
cyh4/FCTSFN | ['semantic segmentation'] | ['A Fully Convolutional Two-Stream Fusion Network for Interactive Image Segmentation'] | solve_32s.py surgery.py solve_fctsfn.py solve_8s.py util.py run_example.py voc_click_pattern_layers.py voc_click_pattern_batch_input_data_aug_layers.py score.py solve_16s.py compute_hist do_seg_tests fast_hist do_seg_tests_fg seg_tests seg_tests_fg getClickMap fromarray join uint8 channels astype mkdir save zeros forward print do_seg_tests iter share_with net format print channels close compute_hist sum diag print do_seg_tests_fg iter share_with net format print channels close compute_hist sum diag minimum arange ones square sqrt meshgrid array range | # FCTSFN This repository includes the [caffe][1] model and code for the following paper on interactive image segmentation: Y. Hu, A. Soltoggio, R. Lock and S. Carter. A Fully Convolutional Two-Stream Fusion Network for Interactive Image Segmentation. Neural Networks, vol. 109, pp. 31-42, 2019. The code was tested with Python 2.7 and Ubuntu 16.04. ## Example run Run `run_example.py` on an example to use the model file and pre-trained weights. ## Model training See `./data/example_data/voc` for an example of how to organize training data (note that this is not the full data; see the above paper for more detailed information on the data used). Run `solve_32s.py`, `solve_16s.py`, and `solve_8s.py` to train the TSLFN subnet of FCTSFN from stride 32 to stride 8. Run `solve_fctsfn.py` to train the MSRN subnet of FCTSFN. | 1,800 |
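The FCTSFN row above feeds user clicks into the network as distance maps (`getClickMap`, together with `meshgrid`/`sqrt`/`minimum`, appears in its dependency list). A hedged NumPy sketch of that idea — not the repo's implementation; the function name, truncation value, and click format are illustrative:

```python
# Sketch of a click map for interactive segmentation: turn each user click
# into a per-pixel Euclidean distance map, usable as an extra input channel.
import numpy as np

def click_map(shape, clicks, cap=255.0):
    """Distance from every pixel to the nearest click, truncated at `cap`."""
    ys, xs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    dist = np.full(shape, np.inf)
    for cy, cx in clicks:
        dist = np.minimum(dist, np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2))
    return np.minimum(dist, cap)
```

For example, `click_map((480, 640), [(100, 200)])` is zero at the click and grows with distance, saturating at `cap`.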
cyk1337/UrbanDict | ['word embeddings'] | ['How to Evaluate Word Representations of Informal Domain?'] | demo/__init__.py UrbanDictScraper/UD_spider/items.py UD_Extractor/Bootstrapping/Tuple.py trainEmbedding/mittens/Glob_settings.py trainEmbedding/mittens/eval/eval_all_vecs.py trainEmbedding/fastText/fastText-0.1.0/eval.py SeqLabeling/crf_trial_backup/trainPyCRF.py HashtagPrediction/settings.py trainEmbedding/GloVe-1.2/eval/python/python2_evaluate.py HashtagPrediction/data_loader.py UrbanDictScraper/UD_spider/middlewares.py SeqLabeling/load_data.py UrbanDictScraper/UD_spider/settings.py preprocess/Preprocess_sub.py UrbanDictScraper/UD_spider/_crawl_utils.py HashtagPrediction/find_best_val_acc.py SeqLabeling/crf_trial_backup/load_data_NLTK.py UD_Extractor/Bootstrapping/__init__.py calcSim/calcCorrelation.py preprocess/gen_hashtag_data.py UD_Extractor/eval.py trainEmbedding/mittens/prep_scripts/old_vocab_to_new.py UD_Extractor/_config.py preprocess/__init__.py SeqLabeling/__init__.py SeqLabeling/crf_trial_backup/trainCRF_NLTK.py UD_Extractor/test.py SeqLabeling/SL_config.py demo/forms.py UrbanDictScraper/run.py trainEmbedding/mittens/prep_scripts/twokenize.py preprocess/twokenize.py SeqLabeling/HandLabel-rm_index_duplicate/get_list.py UD_Extractor/baseline.py trainEmbedding/mittens/prep_scripts/concatenate_corpus.py trainEmbedding/mittens/prep_scripts/twitter_process.py preprocess/data_split.py UD_Extractor/plot_wordcloud.py UD_Extractor/Bootstrapping/Definition.py calcSim/sample2evalGold.py SeqLabeling/CRF.py trainEmbedding/mittens/gen_wiki.py UrbanDictScraper/UD_spider/spiders/UD.py SeqLabeling/crf_trial_backup/label_processing_NLTK.py preprocess/TwitterProcess.py SeqLabeling/sample2eval.py demo/config.py UD_Extractor/Bootstrapping/Seed.py UrbanDictScraper/UD_spider/spiders/UD_API.py HashtagPrediction/plot_fit.py HashtagPrediction/train_cnn.py UrbanDictScraper/UD_spider/pipelines.py SeqLabeling/label_processing.py SeqLabeling/_utils.py UrbanDictScraper/__init__.py 
UD_Extractor/ie_utils.py UD_Extractor/Bootstrapping/Pattern.py demo/run.py UD_Extractor/Bootstrap.py demo/_view_func.py UD_Extractor/BootstrapIE.py trainEmbedding/GloVe-1.2/eval/python/evaluate.py calcSim/calcMAP.py trainEmbedding/mittens/prep_scripts/wikipedia_process.py UrbanDictScraper/UD_spider/spiders/__init__.py trainEmbedding/w2v/W2V.py filter_variant_tuple evaluate_pair load_embedding evaluate_all_pairs gen_goldTuple fetch_gold_from_db sample4handcheck QueryForm demo_page extract_variant_spelling search_UrbanDict find_all_entries load_test load_pretrained_model load_tweets plot_all_history save_history visialize_model plot_fit save_fig write2file take_top_N count_tweets allcaps hashtag tokenize traverse_docs tweet_process save_en_tweets main tokenizeRawTweetText simpleTokenize addAllnonempty splitToken squeezeWhitespace splitEdgePunct regex_or tokenize normalizeTextForTagger SelfTrainCRF conn_db load_unlabel_data load_data gen_label eval_single_exp gen_sample100 main count_estimated_label file_len timeit days_hours_mins_secs conn_db load_unlabel_data load_data gen_label word2features split_train_test_set get_labels mk_prediction get_tokens eval_Test main trainPyCRF extract_features word2features split_train_test_set get_labels mk_prediction get_tokens eval_Test main trainPyCRF extract_features compat_splitting similarity main evaluate_vectors main evaluate_vectors processfile verblog traversedocs concatenate_main log tokenizeRawTweetText simpleTokenize addAllnonempty splitToken squeezeWhitespace splitEdgePunct regex_or tokenize normalizeTextForTagger train_cbow train_sg train_sg_ns train_cbow_ns Basic Baseline main Bootstrap main BootstrapIE update_variant_db load_iter update_label_db sample2Estimate_prec get_defns_from_defids count_num update_sample_dir read_valid_file eval_recall save_iter _count_and_write_db conn_db normalized_levenshtein detokenize days_hours_mins_secs iterative_levenshtein load_pkl dump_pkl fetch_data mask basic_plot 
iterative_levenshtein Definition Pattern Seed Tuple UdSpiderItem UdSpiderSpiderMiddleware UdSpiderDownloaderMiddleware SyncMySQLPipeline CsvExporterPipeline AsyncMySQLPipeline changeTime days_hours_mins_secs _time_log _err_log _filter_word_log UdSpider UdApiSpider dict list print keys print close lower open append split T norm print index dot join format print evaluate_pair len system close load_embedding filter_variant_tuple array open system close open iterrows format read_sql create_engine write to_csv connect close lower split open SubmitField StringField SelectField QueryForm data search_UrbanDict validate_on_submit get urljoin text print select dict BeautifulSoup get_text zip append load SelfTrainCRF format predict_single print lower nlp startswith append extract_features dict extract_variant_spelling find_all_entries count read_csv read_csv dict subplot list plot save_fig xlabel grid ylabel title figure legend range len join format print savefig mkdir join format print plot_model mkdir join from_dict format print to_csv history mkdir grid show subplot list ylabel title legend read_csv range format plot mkdir listdir enumerate join xlabel print figure save_fig len print dict items list len join format group isupper split group re_sub format lower join listdir tweet_process traverse_docs sub addAllnonempty len splitEdgePunct append range finditer split append strip search replace unescape tokenize normalizeTextForTagger connect create_engine int len split load join iterrows format print extend gen_label lower nlp append read_csv load str iterrows print tolist lower nlp append read_csv enumerate join format print system mkdir listdir join sorted print system listdir communicate Popen join count_estimated_label listdir eval_single_exp startswith tolist apply apply append extend train_test_split load_data set_params print Trainer zip append train print classification_report Tagger open join format print Tagger tag load_unlabel_data zip extract_features enumerate 
open split_train_test_set join trainPyCRF eval_Test shape_ tag_ lemma_ text pos_ is_stop is_alpha dep_ range print print norm items list T add_argument shape ArgumentParser parse_args sum evaluate_vectors zeros len int T arange print min dot flatten ceil zeros float sum array range len print log join endswith close verblog open verblog processfile walk rstrip rootdir add_argument verbose ArgumentParser traversedocs parse_args save_word2vec_format Word2Vec save save_word2vec_format Word2Vec save save_word2vec_format Word2Vec save save_word2vec_format Word2Vec save init_bootstrap Bootstrap seeds BootstrapIE print read_valid_file len print join system join conn_db read_sql join listdir get_defns_from_defids insert print to_csv set mkdir defid_list sample DataFrame append execute conn_db print len execute str conn_db nunique update_variant_db iterrows join update_label_db print tuple insert tolist dirname read_csv _count_and_write_db join sorted listdir join sorted listdir system join mkdir join mkdir strip min range len read_sql connect create_engine show join axis imshow savefig figure generate show join recolor axis set to_file imshow title savefig figure to_array generate array open Field join mkdir join mkdir join mkdir divmod urljoin ascii_uppercase append urljoin ascii_uppercase append | # Discovering spelling variants on Urban Dictionary Source code of the paper [How to Evaluate Word Representations of Informal Domain?](https://arxiv.org/abs/1911.04669) ## Scraping data from [Urban Dictionary](https://www.urbandictionary.com/) :bamboo: * Scraping data from webpage: ```diff + scrapy crawl UD ``` * Scraping data via API: ```diff + scrapy crawl UD_API | 1,801 |
cyn228/Yelp-Sentiment | ['classification'] | ['Predicting the Sentiment Polarity and Rating of Yelp Reviews'] | classify.py yelp_utils.py classify print_usage valid_args loadData numLines int time print loadData SGDClassifier LogisticRegression Pipeline numLines float MultinomialNB predict fit print append loads open | # Sentiment Predictor For Yelp Review ## About Given a Yelp review, this project builds two types of classifiers to assign to the review 1) a positive or negative sentiment, and 2) a rating in the interval [1, 5]. The classifier is trained with either Naive Bayes, SVM, or Logistic Regression as the model, and the [Yelp dataset](http://www.yelp.com/dataset_challenge/). The corresponding paper *Predicting the Sentiment Polarity and Rating of Yelp Reviews* may be found on arXiv [here](http://arxiv.org/abs/1512.06303). ## Motivation From Section 1.2 of *Predicting the Sentiment Polarity and Rating of Yelp Reviews*: "It is useful for Yelp to associate review text with a star rating (or at least a positive or negative assignment) accurately in order to judge how helpful and reliable certain reviews are. Perhaps users could give a good review but a bad rating, or vice versa. Also Yelp might be interested in automating the rating process, so that all users would have to do is write the review, and Yelp could give a suggested rating." ## Example The user can specify which type of classification to perform (positive/negative or 5-star), which technique to use (Naive Bayes, SVM, or Logistic Regression), and how much data to use. Assuming you have the Yelp review dataset in the same directory as `classify.py`, the following will build a Logistic Regression classifier for 5-star classification using 80% of the data: ``` classify.py svm False 80 | 1,802 |
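The Yelp row above builds its classifiers with scikit-learn (`Pipeline`, `MultinomialNB`, `SGDClassifier`, and `LogisticRegression` appear in its dependency list). A minimal, hedged sketch of such a text-classification pipeline — the toy reviews and the Naive Bayes choice here are illustrative, not the repo's exact `classify.py`:

```python
# Bag-of-words sentiment pipeline of the kind the README describes: a
# vectorizer turning review text into token counts, followed by one of the
# three model choices (Naive Bayes shown here).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

def build_sentiment_pipeline():
    return Pipeline([
        ("vectorizer", CountVectorizer()),   # review text -> token counts
        ("classifier", MultinomialNB()),     # positive/negative model
    ])

if __name__ == "__main__":
    reviews = ["great food and friendly staff",
               "terrible service and cold food",
               "amazing experience, will come back",
               "awful place, never again"]
    labels = ["pos", "neg", "pos", "neg"]
    model = build_sentiment_pipeline().fit(reviews, labels)
    print(model.predict(["friendly staff and great experience"])[0])
```

Swapping `MultinomialNB()` for `SGDClassifier()` or `LogisticRegression()` gives the other two model variants the README mentions.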
cynicaldevil/neural-style-transfer | ['style transfer'] | ['Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization'] | train.py | Arbitrary neural style transfer implementation in PyTorch. Based on https://arxiv.org/abs/1703.06868 | 1,803 |
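The AdaIN paper implemented by the row above aligns the per-channel statistics of content features with those of style features. A hedged NumPy sketch of that core operation (the repo's `train.py` is PyTorch and is not reproduced here; shapes and `eps` are illustrative):

```python
# Adaptive instance normalization: normalize each content channel to zero
# mean / unit std over the spatial dimensions, then rescale and shift it to
# the style channel's mean and std.
import numpy as np

def adain(content, style, eps=1e-5):
    """content, style: feature maps of shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c = rng.normal(size=(4, 8, 8))
    s = rng.normal(loc=2.0, scale=3.0, size=(4, 8, 8))
    out = adain(c, s)
    # per-channel mean of the output now matches the style features exactly
    print(np.allclose(out.mean(axis=(1, 2)), s.mean(axis=(1, 2))))
```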
cysmith/neural-style-tf | ['style transfer'] | ['A Neural Algorithm of Artistic Style', 'Artistic style transfer for videos', 'Preserving Color in Neural Artistic Style Transfer'] | neural_style.py get_noise_image style_layer_loss pool_layer write_image_output get_style_images get_prev_frame maybe_make_directory get_content_weights get_bias write_video_output read_image get_optimizer write_image sum_masked_style_losses sum_shortterm_temporal_losses warp_image parse_args get_weights sum_longterm_temporal_losses conv_layer normalize convert_to_original_colors render_single_image postprocess relu_layer stylize build_model preprocess sum_style_losses check_image get_longterm_weights render_video main temporal_loss read_weights_file minimize_with_lbfgs get_content_frame read_flow_file content_layer_loss get_init_image get_prev_warped_frame gram_matrix sum_content_losses get_content_image minimize_with_adam get_mask_image mask_style_layer normalize video_output_dir style_layer_weights add_argument style_imgs_weights maybe_make_directory img_output_dir ArgumentParser content_layer_weights video relu_layer print Variable pool_layer loadmat shape verbose zeros conv_layer model_weights get_shape format print conv2d verbose get_shape format relu print verbose get_shape format print max_pool verbose avg_pool constant reshape size constant pow get_shape value reduce_sum get_shape value reduce_sum gram_matrix pow reshape transpose matmul convert_to_tensor get_shape value multiply stack get_mask_image append expand_dims range convert_to_tensor style_layers zip style_layer_weights style_imgs_weights assign mask_style_layer style_mask_imgs run convert_to_tensor style_layers style_layer_weights style_imgs_weights assign zip run convert_to_tensor content_layers assign zip content_layer_weights run size float32 reduce_sum cast float l2_loss maximum get_content_weights prev_frame_indices range get_prev_warped_frame assign prev_frame_indices get_longterm_weights range run 
get_prev_warped_frame assign get_content_weights temporal_loss run astype float32 IMREAD_COLOR preprocess check_image imread postprocess imwrite copy astype copy list readlines len map float32 dstack zeros array range split sum makedirs minimize print assign verbose global_variables_initializer run format minimize print assign verbose eval global_variables_initializer run AdamOptimizer learning_rate ScipyOptimizerInterface join format write_image video_output_dir zfill style_layers style_imgs_weights maybe_make_directory open str write_image img_name init_img_type style_weight content_layers format max_iterations close zip max_size tv_weight optimizer join style_imgs content_img write style_mask_imgs content_weight img_output_dir get_prev_frame get_prev_warped_frame get_noise_image noise_ratio join format video_input_dir zfill read_image join content_img_dir float astype float32 IMREAD_COLOR shape preprocess check_image resize max_size imread join style_imgs astype float32 IMREAD_COLOR shape preprocess check_image resize append imread style_imgs_dir seed astype float32 join content_img_dir astype float32 IMREAD_GRAYSCALE check_image resize imread amax join format video_output_dir zfill IMREAD_COLOR check_image imread str join format read_flow_file video_input_dir astype float32 get_prev_frame preprocess str join format video_input_dir read_weights_file remap shape zeros float range postprocess COLOR_LUV2BGR COLOR_YCR_CB2BGR COLOR_BGR2YUV astype float32 COLOR_BGR2LAB COLOR_BGR2YCR_CB merge preprocess COLOR_BGR2LUV COLOR_LAB2BGR COLOR_YUV2BGR cvtColor split get_content_image content_img get_style_images end_frame range start_frame parse_args render_single_image render_video video | # neural-style-tf This is a TensorFlow implementation of several techniques described in the papers: * [Image Style Transfer Using Convolutional Neural Networks](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf) by Leon A. 
Gatys, Alexander S. Ecker, Matthias Bethge * [Artistic style transfer for videos](https://arxiv.org/abs/1604.08610) by Manuel Ruder, Alexey Dosovitskiy, Thomas Brox * [Preserving Color in Neural Artistic Style Transfer](https://arxiv.org/abs/1606.05897) by Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman Additionally, techniques are presented for semantic segmentation and multiple style transfer. The Neural Style algorithm synthesizes a [pastiche](https://en.wikipedia.org/wiki/Pastiche) by separating and combining the content of one image with the style of another image using convolutional neural networks (CNN). Below is an example of transferring the artistic style of [The Starry Night](https://en.wikipedia.org/wiki/The_Starry_Night) onto a photograph of an African lion: | 1,804 |
czarmanu/tiramisu_keras | ['remote sensing image classification', 'change detection for remote sensing images', 'semantic segmentation'] | ['LAKE ICE MONITORING WITH WEBCAMS'] | train.py helper.py result.py test.py data_loader.py dataGenerator.py modelTiramisu.py saveHDF5.py sparse DataGenerator recover_set_list get_num_aug get_mean get_crop main DataLoader DataSet classTocolor computeIoU normalized colorToclass one_hot_reverse one_hot_it Tiramisu main SaveHDF5 main prediction main train int floor value where unique true_divide zeros sum range len print unique append array range len hdf5_dir save_hdf5 in_dir DataLoader generate hdf5_file parse_args zeros float32 equalizeHist zeros range zeros argmax range zeros argmax range true_divide zeros range zeros range SaveHDF5 image_dir label_dir writeHDF5 hdf5_name imwrite num_crop_per_im floor save one_hot_reverse open create sorted ones len predict_on_batch set_list c colorToclass expand_dims sum range value DataGenerator model_from_json close mean load_weights get_crop int read print r File write crop_list zeros array amax makedirs result_dir prediction model_name save_weights save open str create list balancing TensorBoard aug len ylabel strftime class_weight title savefig legend generate Nadam format plot DataGenerator close fit_generator mean keys compile print xlabel EarlyStopping File write crop_list figure ModelCheckpoint array makedirs train | # Lake Ice Monitoring with Webcams This repository is the implementation (keras) of: Xiao M., Rothermel M., Tom M., Galliani S., Baltsavias E., Schindler K.: [Lake Ice Monitoring with Webcams](https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/IV-2/311/2018/isprs-annals-IV-2-311-2018.pdf), ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-2, pages 311-317, 2018  This work is part of the [Lake Ice Project (Phase 
1)](https://prs.igp.ethz.ch/research/completed_projects/integrated-monitoring-of-ice-in-selected-swiss-lakes.html). Here is the link to [Phase 2](https://prs.igp.ethz.ch/research/current_projects/integrated-lake-ice-monitoring-and-generation-of-sustainable--re.html) of the same project. The implementation was extended and modified starting from the one in [0bserver07/One-Hundred-Layers-Tiramisu ](https://github.com/0bserver07/One-Hundred-Layers-Tiramisu). ##### Pre-requisites: - Numpy - Keras | 1,805 |
czbiohub/noise2self | ['denoising'] | ['Noise2Self: Blind Denoising by Self-Supervision'] | models/modules.py mask.py util.py models/dncnn.py models/singleconv.py models/unet.py models/babyunet.py gaussianprocess/gp.py models/models.py pixel_grid_mask interpolate_mask Masker test_mse_rescale plot_tensors plot_images show_data smooth gpuinfo getbestgpu clamp_tensor ssim test_psnr tensor_to_numpy show test_percentile_normalizer psnr gaussian scale_tensor expand normalize get_args create_window test_bernoulli_noise plot_grid getfreegpumem mse show_tensor normalize_mi_ma PercentileNormalizer random_noise _ssim gaussian_process_posterior rbe_kernel convolve2d_vectorized grid_distances GPDataset sample_rbe_gp make_test_gp_dataset BabyUnet DnCNN get_model ConvBlock pad_circular SingleConvolution Unet zeros range conv2d device to sum array transpose imshow numpy max clip numpy subplots set_title set_ticks imshow range len cat show_tensor subplots concatenate reshape set_ticks imshow range imshow set_ticks plot_grid len concatenate bernoulli ones clamp shape device to poisson ones random_noise manual_seed ones randn unsqueeze randn batchwise_mean Tensor contiguous unsqueeze pow conv2d create_window size type_as get_device cuda is_cuda conv2d device to sum array percentile dtype astype clip evaluate astype uint16 PercentileNormalizer communicate Popen split print getfreegpumem index append max range parse_args add_argument ArgumentParser minimum arange concatenate multivariate_normal reshape zeros grid_distances inv solve dot repeat len seed makedirs MSE GPDataset cat | # Noise2Self: Blind Denoising by Self-Supervision This repo demonstrates a framework for blind denoising high-dimensional measurements, as described in the [paper](https://arxiv.org/abs/1901.11365). It can be used to calibrate classical image denoisers and train deep neural nets; the same principle works on matrices of single-cell gene expression. 
<img src="https://github.com/czbiohub/noise2self/blob/master/figs/hanzi_movie.gif" width="512" height="256" title="Hanzi Noise2Self"> *The result of training a U-Net to denoise a stack of noisy Chinese characters. Note that the only input is the noisy data; no ground truth is necessary.* ## Images The notebook [Intro to Calibration](notebooks/Intro%20to%20Calibration.ipynb) shows how to calibrate any traditional image denoising model, such as median filtering, wavelet thresholding, or non-local means. We use the excellent [scikit-image](www.scikit-image.org) implementations of these methods, and have submitted a PR to incorporate self-supervised calibration directly into the package. (Comments welcome on the [PR](https://github.com/scikit-image/scikit-image/pull/3824)!) The notebook [Intro to Neural Nets](notebooks/Intro%20to%20Neural%20Nets.ipynb) shows how to train a denoising neural net using a self-supervised loss, on the simple example of MNIST digits. The notebook runs in less than a minute, on CPU, on a MacBook Pro. We implement this in [pytorch](www.pytorch.org). | 1,806 |
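The Noise2Self row above trains denoisers with a masked, self-supervised loss (`pixel_grid_mask` and `interpolate_mask` appear in the repo's `mask.py`). A hedged NumPy sketch of the masking idea — not the repo's `Masker` class; the 4×4 grid width and the mean-filter interpolation are illustrative choices:

```python
# Hide a grid of pixels, fill each from its neighbors, and score the denoiser
# only on the hidden pixels: the identity function then cannot achieve zero
# loss, so no clean ground truth is needed.
import numpy as np

def grid_mask(shape, phase, width=4):
    """Boolean mask selecting one pixel out of every width x width block."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return (ys % width == phase // width) & (xs % width == phase % width)

def masked_input(image, mask):
    """Replace masked pixels by the mean of their 8 neighbors, so a masked
    pixel carries no information about its own noise."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="reflect")
    neighbors = sum(padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)) / 8.0
    return np.where(mask, neighbors, image)

def self_supervised_mse(denoise, noisy, phase=0):
    """Evaluate `denoise` only on the hidden pixels."""
    mask = grid_mask(noisy.shape, phase)
    output = denoise(masked_input(noisy, mask))
    return np.mean((output[mask] - noisy[mask]) ** 2)
```

Cycling `phase` over all positions in the grid block covers every pixel, which is how a full training epoch can be assembled.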
czhang99/SynonymNet | ['entity disambiguation'] | ['Entity Synonym Discovery via Multipiece Bilateral Context Matching'] | src/train_siamese.py src/train_triplet.py src/model.py src/utils.py __init__.py SiameseModel TripletModel inference valid inference valid createVocabulary siamese_loss loadVocabulary evaluateTopN __splitTagType margin_loss sentenceToIds padSentence DataProcessor str concatenate roc_curve close average_precision_score get_batch_siamese info append auc DataFrame DataProcessor run concatenate close groups evaluateTopN get_batch_siamese append read_csv DataProcessor run get_batch_triple get_batch_triple greater float32 pow cast less str list keys mean nan_to_num info append array len get isinstance index append split split | # Entity Synonym Discovery via Multipiece Bilateral Context Matching This project provides source code and data for SynonymNet, a model that detects entity synonyms via multipiece bilateral context matching. Details about SynonymNet can be accessed [here](https://arxiv.org/abs/1901.00056), and the implementation is based on the Tensorflow library. ## Quick Links - [Installation](#installation) - [Usage](#usage) - [Data](#data) - [Results](#results) - [Acknowledgements](#acknowledgements) ## Installation | 1,807 |
czk32611/GEDDnet | ['gaze estimation'] | ['Towards High Performance Low Complexity Calibration in Appearance Based Gaze Estimation'] | code/PreProcess.py code/GEDDnet.py code/infer.py code/PreProcess_eyecenter.py code/tf_utils.py code/train.py GEDDnet_infer GEDDnet _2d2vec _angle2error sigmoid main shape_to_np preprocess_face_image _vec22d flip_images randomRotate preprocess_eye_image pre_process_face_images pre_process_eye_images WarpNCrop WarpNDraw point_to_matrix grayNhist max_pool_2x2 dilated2d conv2d dense_to_one_hot bias_variable weight_variable _2d2vec _vec22d _angle2error sigmoid creatIter main load max_pool_2x2 relu conv2d l2_loss load resize_images max_pool_2x2 relu concat conv2d stack minimum arccos maximum pi sum zeros range VideoCapture float64 camera_mat Saver global_variables placeholder resizeWindow append WINDOW_NORMAL get_frontal_face_detector GEDDnet_infer int_ set shape_predictor namedWindow mat reshape inv float32 moveWindow zeros loadmat array stack warpAffine _vec22d randint getRotationMatrix2D dot vstack zeros array range resize_images pad_to_bounding_box constant concat crop_to_bounding_box rotate stack uniform cast round map_fn resize_images pad_to_bounding_box constant crop_to_bounding_box stack uniform cast round map_fn shape uniform array arange uint8 COLOR_RGB2GRAY equalizeHist astype stack cvtColor int arctan2 astype degrees getRotationMatrix2D sqrt warpAffine norm point_to_matrix int_ reshape mat hstack transpose astype grayNhist dot cross mean shape vstack resize abs warpPerspective line tuple transpose matmul array truncated_normal random_normal arange int_ astype zeros ravel array from_tensor_slices make_initializable_iterator shuffle get_next batch _2d2vec arange TRAINABLE_VARIABLES where abs num_subject open str GEDDnet data_dir get_collection reduce_sum pre_process_face_images range format creatIter join system write AdamOptimizer reduce_mean UPDATE_OPS bool pre_process_eye_images | # GEDDnet: A Network for Gaze Estimation with 
Dilation and Decomposition  ## Dilated Convolution We use dilated convolutions to capture high-level features at high resolution from eye images. We replace some regular convolutional layers and max-pooling layers of a VGG16 network with dilated convolutional layers with different dilation rates. ## Gaze Decomposition We propose gaze decomposition for appearance-based gaze estimation, which decomposes the gaze estimate into the sum of a subject-independent term estimated from the input image by a deep convolutional network and a subject-dependent bias term. During training, both the weights of the deep network and the bias terms are estimated. During testing, if no calibration data is available, we can set the bias term to zero. Otherwise, the bias term can be estimated from images of the subject gazing at different gaze targets. The proposed gaze decomposition method enables low-complexity calibration, i.e., using calibration data collected when subjects view only one or a few gaze targets and the number of images per gaze target is small. ## Setup ### 1. Prerequisites Tensorflow == 1.15 | 1,808 |
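The gaze-decomposition scheme in the row above — a subject-independent network term plus a per-subject bias calibrated from a few labeled samples — can be sketched as follows (NumPy, not the repo's TensorFlow graph; all numbers are made up):

```python
# Gaze decomposition: estimate = network term + subject-dependent bias.
# With no calibration data the bias is zero; otherwise it is the mean
# residual over a handful of calibration samples.
import numpy as np

def calibrate_bias(net_predictions, targets):
    """Mean residual over calibration samples (rows: samples, cols: angles)."""
    return np.mean(targets - net_predictions, axis=0)

def gaze_estimate(net_prediction, bias=None):
    """Subject-independent term plus optional subject-dependent bias."""
    return net_prediction if bias is None else net_prediction + bias

if __name__ == "__main__":
    preds = np.array([[0.10, 0.20], [0.30, -0.10], [-0.20, 0.00]])
    targets = preds + np.array([1.0, -0.5])  # subject's gaze offset by a fixed bias
    print(calibrate_bias(preds, targets))    # recovers roughly [1.0, -0.5]
```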
czyssrs/Logic2Text | ['text generation'] | ['Logic2Text: High-Fidelity Natural Language Generation from Logical Forms'] | gpt_base/encoder.py gpt_base/preprocess.py execution/execute.py gpt_base/Main.py gpt_base/model.py execution/APIs.py gpt_base/utils.py gpt_base/DataLoader.py gpt_base/SeqUnit.py ExeError fuzzy_match_filter nth_maxmin fuzzy_compare_filter obj_compare agg is_ascii hop_op Node execute_all DataLoader Preprocessor bytes_to_unicode get_pairs get_encoder Encoder main train evaluate mlp default_hparams norm past_shape model block merge_states gelu attention_mask gpt_emb_init_tune positions_for conv1d softmax expand_tile attn shape_list split_states test_split_for_rouge linear_table_in make_html_safe make_dirs preprocess text2id check_res clean_space SeqUnit read_word2vec create_init_embedding read_word2vec_zip progress_bar load_vocab make_html_safe bleu_score write_word get_current_git_version get_rouge write_log check_res format_time get_res reset_index replace extract int reset_index replace all to_datetime astype map findall float DataFrame datetime float extract all replace astype mean sum fillna extract int list all replace to_datetime astype map DataFrame values defaultdict print tqdm Node eval append DataFrame enumerate append list range ord add set str time join epoch evaluate train_set model print flag_values_dict DataLoader reset write_log report save use_table range makedirs decode strip DataLoader setLevel max get_res open tolist bleu_score generate append use_table dev_set test_set close convert_and_evaluate generate_beam join WARNING Rouge155 min write index tqdm array makedirs join ConfigProto gpt_model_name gpt_model_path as_list shape exp reduce_max shape_list shape_list range convert_to_tensor ndims append strip split replace isalpha strip enumerate join makedirs close open join mkdir time test_split_for_rouge print text2id float join mkdir realpath join dirname list print strip len map infolist array open ZipFile split read_word2vec 
print endswith read_word2vec_zip load_vocab sqrt uniform load_word2vec_format range len int time join format_time write append range flush len int join write open Repo hexsha print append join index split load deepcopy readlines strip zip append open | # Logic2Text ## Data In the dataset folder, we have the full dataset (all_data.json), and the train test split (train.json, valid.json, test.json). Each example is in a dictionary of the following format: ``` "topic": table caption, "wiki": link to the wikipedia page, "action": logic type, "sent": description, "annotation": raw annotation data from the mturk, | 1,809 |
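The record format shown in the Logic2Text row above can be consumed with a few lines of Python. This is a hedged sketch: the helper names are illustrative, and it assumes the split files (`train.json`, etc.) are JSON arrays of such records:

```python
# Read the dataset splits and pull fields out of each record: "topic" is the
# table caption, "action" the logic type, and "sent" the description.
import json

def load_examples(path):
    """Load the list of annotated records from a split such as train.json."""
    with open(path) as f:
        return json.load(f)

def summarize(example):
    """Return (table caption, logic type, description) for one record."""
    return (example["topic"], example["action"], example["sent"])

def examples_with_action(examples, action):
    """All records whose logic type ('action' field) matches `action`."""
    return [ex for ex in examples if ex["action"] == action]
```

For instance, `examples_with_action(load_examples("train.json"), "superlative")` would collect one logic type for inspection.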
d-maurya/hypred_tensorEVD | ['link prediction'] | ['Hyperedge Prediction using Tensor Eigenvalue Decomposition'] | baselines/HyperedgePrediction_TD/dataset_utils/movielens_network.py baselines/HyperedgePrediction_TD/dataset_utils/twitter_network.py baselines/HyperedgePrediction_TD/dataset_utils/amazon_network.py realdata_utils.py baselines/HyperedgePrediction_TD/__main__.py baselines/HyperedgePrediction_TD/hyperedge_prediction.py baselines/HyperedgePrediction_TD/dataset_utils/dblp_network.py baselines/HyperedgePrediction_TD/dataset_utils/cora_citeseer_network.py baselines/HyperedgePrediction_TD/dataset_utils/mat_utils.py baselines/HyperedgePrediction_TD/model_evaluation_1.py gen_candset.py baselines/HyperedgePrediction_TD/network.py baselines/HyperedgePrediction_TD/hypergraph_utils.py baselines/HyperedgePrediction_TD/dataset_utils/aminer_network.py baselines/HyperedgePrediction_TD/__init__.py hy_utils.py realdata_main.py baselines/HyperedgePrediction_TD/measures.py all_models.py common_neigh hypred_ndp_cand hypred_prop_teneig Katz main_model get_candset_muni sample_neqcliques_m sample_mns get_candset sample_neqcliques sample_neqcliques_m_degree rm_hy_main f1_score_self hy_reduction get_incidence_matrix compute_hm_f1_score get_incidence_matrix_w am_gm_score compute_auc flatten get_incidence_from_hy get_nodeid compute_f1_main hedges_to_pg intersection harmonic_mean rm_hy plot_hyedgefreq get_largest_cc get_hy_specific_card get_hyperedges_from_incidence get_amazon_network get_realdata predict_hyperedge get_pref_node compute_NHAS get_hyedges_from_indices get_incidence_matrix validate_hyedges get_nodeid get_adj_matrix compute_avg_f1_score compute_precision compute_auc compute_f1_score compute_model_f1_score get_hyedges_degree_dist get_missing_hyedges_indices get_hra_scores get_network main model_performance get_amazon_network prune_hyedges get_aminer_cocitation get_file_paths get_aminer_coreference get_aminer_network get_cocitation_network get_coreference_network 
get_cora_citeseer_network get_dblp_network store_as_mat get_largest_cc get_movielens_network get_twitter_network reciprocal asarray minimize squeeze transpose rand dot flatten spdiags diagonal sum x dot transpose dot transpose inv identity get_hyedges_degree_dist get_hra_scores Katz hypred_ndp_cand hypred_prop_teneig common_neigh list subgraph sort tuple nodes choice set add sample range sum list asarray subgraph sort squeeze tolist tuple nodes apply set get_incidence_from_hy choice add sample DataFrame range int list tolist nodes apply set edges DataFrame union range combinations list tolist map merge apply sample_mns DataFrame fillna sample_neqcliques_m_degree len groupby list hy_reduction tolist get_candset_muni apply DataFrame keys groupby list sample_neqcliques_m apply DataFrame keys len roc_curve auc abs power sum prod len lil_matrix enumerate array get_nodeid lil_matrix get_nodeid lil_matrix enumerate int list asarray DataFrame squeeze min tolist map merge apply get_incidence_from_hy mean sample sum values len groupby list tolist apply DataFrame keys rm_hy len intersection compute_f1_score len groupby list compute_hm_f1_score tolist apply harmonic_mean DataFrame keys transpose from_scipy_sparse_matrix dot diagonal spdiags combinations add_edge Graph print append enumerate has_edge add_edge remove list Graph print nodes index add_nodes_from max range len update list get_largest_cc get_incidence_matrix_w tolist set apply DataFrame len DataFrame apply list append loadmat range len tocsr apply bar title savefig figure transform DataFrame asarray arange squeeze choice sum len append range get_pref_node compute_NHAS choice add set reciprocal asarray subtract squeeze transpose dot spdiags diagonal sum asarray tocsr squeeze set append sum range len tolil append range len set compute_f1_score len list shuffle append float range asarray print squeeze dot sum diags get_adj_matrix int list asarray squeeze Counter append sum keys validate_hyedges get_hyedges_degree_dist 
get_network predict_hyperedge seed str list compute_avg_f1_score append sum range get_hra_scores set choice get_hyedges_from_indices print sort get_missing_hyedges_indices std len getcwd loadmat print compute_model_f1_score seed add_argument model_performance use_candidateset ArgumentParser K parse_args dataset network append readlines open list readlines add set get_file_paths append open update list readlines set add get_file_paths append open list remove set list prune_hyedges get_aminer_cocitation get_aminer_coreference keys list readlines add set open append split list readlines add set open append split get_cocitation_network get_coreference_network load list sort len set append range open connected_component_subgraphs update list get_movielens_network get_largest_cc get_incidence_matrix get_cora_citeseer_network getcwd get_dblp_network sort set get_twitter_network savemat get_aminer_network get_amazon_network list readlines len prune_hyedges add set split append next keys open int list sort readlines len open append keys range split
# hypred_tensorEVD
This repo contains the code for hyperedge prediction using tensor eigenvalue decomposition. We represent the hypergraph as a tensor and use the Fiedler vector from the Laplacian tensor to predict the new hyperedges using the hyperedge score proposed in our work. We compare the results on 5 datasets against three baselines: common neighbour, Katz, and <a href="https://arxiv.org/pdf/2006.11070.pdf">HPRA</a> (a resource allocation based algorithm). The code of the HPRA algorithm is taken directly from <a href="https://github.com/darwk/HyperedgePrediction ">this link</a>. For more info about the proposed algorithm, please refer to <a href="https://arxiv.org/pdf/2102.04986.pdf">Hyperedge Prediction using Tensor Eigenvalue Decomposition</a>.
# Running the code
- Run the file 'realdata_main.py' <br>
- Please change the dataset using the variable 'realdt_argsinp' <br>
- Please change the folder name using the variable 'new_folder' for each run <br>
The results will be saved in the folder results/<new_folder>/. It will save multiple files.
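To make the scoring idea in this README concrete without the full tensor machinery: on an ordinary graph Laplacian, a candidate hyperedge whose nodes sit close together in the Fiedler vector is more plausible than one straddling a cut. A toy numpy sketch of that intuition (function names and the score itself are illustrative, not this repo's tensor-based code):

```python
import numpy as np

def fiedler_vector(adj):
    """Second-smallest eigenvector of the combinatorial graph Laplacian."""
    lap = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(lap)   # eigh returns eigenvalues in ascending order
    return vecs[:, 1]                  # Fiedler vector

def hyperedge_score(fiedler, nodes):
    """Toy score: a candidate hyperedge whose nodes have similar Fiedler
    entries (same smooth region of the graph) scores higher."""
    return -np.var(fiedler[list(nodes)])

# Two triangles (nodes 0-2 and 3-5) joined by the single bridge edge 2-3.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0

f = fiedler_vector(adj)
# A hyperedge inside one triangle should beat one straddling the bridge.
assert hyperedge_score(f, {0, 1, 2}) > hyperedge_score(f, {1, 2, 3})
```

The repo's actual score comes from the Laplacian tensor of the hypergraph itself; this graph-Laplacian version only illustrates why Fiedler-vector proximity is a useful signal.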
d909b/perfect_match | ['counterfactual inference'] | ['Perfect Match: A Simple Method for Learning Representations For Counterfactual Inference With Neural Networks'] | perfect_match/models/baselines/gradientboosted.py perfect_match/data_access/ihdp/data_access.py perfect_match/models/baselines/cfr/util.py perfect_match/data_access/patient_generator.py perfect_match/models/pehe_loss.py perfect_match/models/baselines/bart.py perfect_match/apps/main.py perfect_match/models/benchmarks/twins_benchmark.py perfect_match/models/model_eval.py perfect_match/models/model_factory.py perfect_match/apps/parameters.py perfect_match/data_access/propensity_batch.py perfect_match/models/baselines/baseline.py perfect_match/models/benchmarks/news_benchmark.py perfect_match/data_access/mahalanobis_batch.py perfect_match/models/baselines/causal_forest.py perfect_match/models/distributions.py perfect_match/models/benchmarks/ihdp_benchmark.py perfect_match/models/benchmarks/jobs_benchmark.py perfect_match/models/cf_early_stopping.py perfect_match/models/benchmarks/tcga_benchmark.py perfect_match/models/model_builder.py perfect_match/data_access/generator.py perfect_match/models/baselines/neural_network.py perfect_match/models/baselines/knn.py perfect_match/models/baselines/psm.py perfect_match/apps/util.py perfect_match/models/baselines/cfr/cfr_net.py perfect_match/data_access/tcga/data_access.py perfect_match/data_access/jobs/data_access.py perfect_match/models/baselines/ganite_package/ganite_builder.py perfect_match/data_access/twins/data_access.py perfect_match/models/baselines/ganite.py perfect_match/models/baselines/tf_neural_network.py perfect_match/models/baselines/gaussian_process.py perfect_match/apps/run_all_experiments.py perfect_match/models/per_sample_dropout.py perfect_match/apps/evaluate.py perfect_match/models/baselines/random_forest.py perfect_match/models/baselines/ordinary_least_squares.py perfect_match/data_access/batch_augmentation.py setup.py 
perfect_match/models/baselines/psm_pbm.py perfect_match/models/baselines/ganite_package/ganite_model.py perfect_match/data_access/news/data_access.py EvaluationApplication MainApplication clip_percentage parse_parameters ReadableDir dataset_is_binary_and_has_counterfactuals get_dataset_params model_is_pbm_variant run random_cycle_generator time_function report_duration error resample_with_replacement_generator log get_num_available_gpus BatchAugmentation wrap_generator_with_constant_y make_keras_generator get_last_id_set MahalanobisBatch get_last_row_id report_distribution make_generator PropensityBatch DataAccess DataAccess DataAccess DataAccess convert_array adapt_array DataAccess CounterfactualEarlyStopping safe_sqrt pdist2sq wasserstein calculate_distances calculate_distance ModelBuilder ModelEvaluation ModelFactory ModelFactoryCheckpoint pehe_loss cf_nn pdist2 pehe_nn PerSampleDropout BayesianAdditiveRegressionTrees PickleableMixin Baseline CausalForest GANITE GaussianProcess GradientBoostedTrees KNearestNeighbours NeuralNetwork OrdinaryLeastSquares1 OrdinaryLeastSquares2 PSM PSM_PBM RandomForest NeuralNetwork CFRNet build_mlp safe_sqrt pdist2sq pdist2 save_config get_nonlinearity_by_name wasserstein pop_dist log lindisc mmd2_lin load_data simplex_project validation_split mmd2_rbf load_sparse GANITEBuilder GANITEModel IHDPBenchmark JobsBenchmark NewsBenchmark TCGABenchmark TwinsBenchmark set_defaults add_argument ArgumentParser tolist len format dataset_is_binary_and_has_counterfactuals print model_is_pbm_variant zip get_dataset_params range permutation RandomState randint range len list_local_devices log print log log zeros float sum range len int get_split_indices permutation StratifiedShuffleSplit rint get_labels filter split floor report_distribution make_propensity_lists get_labelled_patients next len BytesIO seek save BytesIO seek transpose reduce_sum square matmul to_float safe_sqrt exp pdist2sq dropout ones concat reduce_max transpose matmul reduce_sum 
shape reduce_mean stop_gradient gather range to_float zeros concat size cond gather range equal size cond equal transpose reduce_sum square matmul gather argmin pdist2 cf_nn square sqrt reduce_mean gather gather concat range int dropout Variable nonlinearity matmul append zeros range random_normal int list permutation range join close write open load loadtxt load_sparse open int todense loadtxt coo_matrix open safe_sqrt square reduce_sum sign reduce_mean gather gather reduce_mean square reduce_sum to_float exp square reduce_sum gather to_float gather pdist2 to_float safe_sqrt exp pdist2sq dropout ones concat reduce_max transpose matmul reduce_sum shape reduce_mean stop_gradient gather range cumsum list maximum range | ## Perfect Match: A Simple Method for Learning Representations For Counterfactual Inference With Neural Networks  Perfect Match (PM) is a method for learning to estimate individual treatment effect (ITE) using neural networks. PM is easy to implement, compatible with any architecture, does not add computational complexity or hyperparameters, and extends to any number of treatments. This repository contains the source code used to evaluate PM and most of the existing state-of-the-art methods at the time of publication of [our manuscript](https://arxiv.org/abs/1810.00656). PM and the presented experiments are described in detail in our paper. Since we performed one of the most comprehensive evaluations to date with four different datasets with varying characteristics, this repository may serve as a benchmark suite for developing your own methods for estimating causal effects using machine learning methods. In particular, the source code is designed to be easily extensible with (1) new methods and (2) new benchmark datasets. 
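For a flavour of the method without running the full pipeline: per the paper, PM augments each minibatch with each sample's nearest neighbour from the other treatment arm, matched on propensity score. A toy numpy sketch of that matching step (all names here are illustrative, not this repository's API):

```python
import numpy as np

def match_batch(propensity, treatment):
    """For every sample, find the index of the nearest sample (by propensity
    score) that received the other treatment. Toy version of per-batch
    matching; illustrative, not the repo's code."""
    matches = np.empty(len(propensity), dtype=int)
    for i, (p, t) in enumerate(zip(propensity, treatment)):
        other = np.flatnonzero(treatment != t)            # other treatment arm
        matches[i] = other[np.argmin(np.abs(propensity[other] - p))]
    return matches

prop = np.array([0.1, 0.2, 0.8, 0.9])
treat = np.array([0, 1, 0, 1])
# Each sample pairs with its closest opposite-arm twin.
assert list(match_batch(prop, treat)) == [1, 0, 3, 2]
```

The real implementation generalizes this to any number of treatments and folds it into minibatch construction during training.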
Author(s): Patrick Schwab, ETH Zurich <[email protected]>, Lorenz Linhardt, ETH Zurich <[email protected]> and Walter Karlen, ETH Zurich <[email protected]> License: MIT, see LICENSE.txt #### Citation If you reference or use our methodology, code or results in your work, please consider citing: @article{schwab2018perfect, title={{Perfect Match: A Simple Method for Learning Representations For Counterfactual Inference With Neural Networks}}, | 1,811 |
da03/Attention-OCR | ['optical character recognition'] | ['Image-to-Markup Generation with Coarse-to-Fine Attention'] | src/__init__.py src/data_util/bucketdata.py tmp.py src/data_util/__init__.py src/model/seq2seq_model.py src/exp_config.py src/launcher.py src/data_util/data_gen.py src/model/cnn.py src/model/seq2seq.py src/model/model.py src/model/__init__.py | ExpConfig main process_args BucketData test_gen DataGen CNN ConvReluBN max_2x2pool batch_norm ConvRelu dropout max_2x1pool tf_create_attention_map var_random Model model_with_buckets tied_rnn_seq2seq embedding_rnn_decoder sequence_loss_by_example one2many_rnn_seq2seq _extract_argmax_and_embed sequence_loss attention_decoder embedding_attention_seq2seq embedding_attention_decoder embedding_tied_rnn_seq2seq rnn_decoder basic_rnn_seq2seq embedding_rnn_seq2seq Seq2SeqModel parse_args set_defaults add_argument ArgumentParser setFormatter basicConfig addHandler StreamHandler Formatter process_args setLevel INFO print EvalGen gen str get_variable as_list get_shape format print convert_to_tensor assert_is_compatible_with convert_to_tensor assert_is_compatible_with output_size convert_to_tensor assert_is_compatible_with output_size | # Attention-OCR
Authors: [Qi Guo](http://qiguo.ml) and [Yuntian Deng](https://github.com/da03)
Visual attention-based OCR. The model first runs a sliding CNN on the image (images are resized to height 32 while preserving aspect ratio). Then an LSTM is stacked on top of the CNN. Finally, an attention model is used as a decoder for producing the final outputs.

# Prerequisites
Most of our code is written based on TensorFlow, but we also use Keras for the convolution part of our model. Besides, we use the Python package `distance` to calculate edit distance for evaluation. (However, that is not mandatory; if `distance` is not installed, we will do exact match.)
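To make the attention-decoder step described above concrete (the real model lives in `src/model/`), a single dot-product attention read over the CNN's column features looks roughly like this in plain numpy; shapes and names are illustrative, not the repo's code:

```python
import numpy as np

def attend(columns, query):
    """One attention read: score each CNN column feature against the decoder
    state, softmax the scores, and return the weighted context vector."""
    scores = columns @ query                   # (T,) dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the T columns
    return weights, weights @ columns          # weights: (T,), context: (D,)

rng = np.random.default_rng(0)
cols = rng.normal(size=(7, 4))   # 7 image columns, 4-dim features each
w, ctx = attend(cols, rng.normal(size=4))
assert np.isclose(w.sum(), 1.0) and ctx.shape == (4,)
```

At each decoding step the LSTM state plays the role of `query`, and the returned context vector feeds the next output prediction.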
### TensorFlow: [Installation Instructions](https://www.tensorflow.org/get_started/os_setup#download-and-setup) (tested on 0.12.1)
### Distance (Optional):
```
wget http://www.cs.cmu.edu/~yuntiand/Distance-0.1.3.tar.gz
dadung/MCVL | ['visual localization', 'visual place recognition'] | ['Visual Localization Under Appearance Change: Filtering Approaches'] | libs/vlfeat-0.9.21/docsrc/doxytag.py libs/vlfeat-0.9.21/docsrc/mdoc.py libs/vlfeat-0.9.21/docsrc/wikidoc.py libs/vlfeat-0.9.21/docsrc/webdoc.py libs/vlfeat-0.9.21/docsrc/formatter.py Doxytag Terminal Lexer B PL L lex Formatter DL BL E extract towiki depth_first breadCrumb MFile Node runcmd xscan wikidoc usage bullet indent inner_content PL group match DL BL len pid Popen waitpid children group lstrip match startswith append open join addMFile addChildNode print sort MFile Node match listdir __next__ prev runcmd join wikidoc print print insert print readlines close len writelines append range exists open | About ============ MATLAB code of our NCAA 2020 paper: "Visual Localization Under Appearance Change: Filtering Approaches" - NCAA 2020. [Anh-Dzung Doan](https://sites.google.com/view/dzungdoan/home), [Yasir Latif](http://ylatif.github.io/), [Tat-Jun Chin](https://cs.adelaide.edu.au/~tjchin/doku.php), [Yu Liu](https://sites.google.com/site/yuliuunilau/home), [Shin-Fang Ch’ng](https://sites.google.com/view/shinfang-chng/), [Thanh-Toan Do](https://sites.google.com/view/thanhtoando/home), and [Ian Reid](https://cs.adelaide.edu.au/~ianr/). [[pdf]](https://arxiv.org/abs/1811.08063) If you use/adapt our code, please kindly cite our paper. <p align="center"> <b>Comparison results on Oxford RobotCar dataset [3] between PoseNet [1], MapNet [2], and our method</b><br> </p>  [1] Alex Kendall, Matthew Grimes, and Roberto Cipolla, "PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization", in CVPR 2015. | 1,813 |
dadung/Visual-Localization-Filtering | ['visual localization', 'visual place recognition'] | ['Visual Localization Under Appearance Change: Filtering Approaches'] | libs/vlfeat-0.9.21/docsrc/doxytag.py libs/vlfeat-0.9.21/docsrc/mdoc.py libs/vlfeat-0.9.21/docsrc/wikidoc.py libs/vlfeat-0.9.21/docsrc/webdoc.py libs/vlfeat-0.9.21/docsrc/formatter.py Doxytag Terminal Lexer B PL L lex Formatter DL BL E extract towiki depth_first breadCrumb MFile Node runcmd xscan wikidoc usage bullet indent inner_content PL group match DL BL len pid Popen waitpid children group lstrip match startswith append open join addMFile addChildNode print sort MFile Node match listdir __next__ prev runcmd join wikidoc print print insert print readlines close len writelines append range exists open | About ============ MATLAB code of our NCAA 2020 paper: "Visual Localization Under Appearance Change: Filtering Approaches" - NCAA 2020. [Anh-Dzung Doan](https://sites.google.com/view/dzungdoan/home), [Yasir Latif](http://ylatif.github.io/), [Tat-Jun Chin](https://cs.adelaide.edu.au/~tjchin/doku.php), [Yu Liu](https://sites.google.com/site/yuliuunilau/home), [Shin-Fang Ch’ng](https://sites.google.com/view/shinfang-chng/), [Thanh-Toan Do](https://sites.google.com/view/thanhtoando/home), and [Ian Reid](https://cs.adelaide.edu.au/~ianr/). [[pdf]](https://arxiv.org/abs/1811.08063) If you use/adapt our code, please kindly cite our paper. <p align="center"> <b>Comparison results on Oxford RobotCar dataset [3] between PoseNet [1], MapNet [2], and our method</b><br> </p>  [1] Alex Kendall, Matthew Grimes, and Roberto Cipolla, "PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization", in CVPR 2015. | 1,814 |
daemonmaker/neural_artistic_style | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | gatys/Styler.py gatys/Server.py singlenet/model.py utils.py gatys/__init__.py singlenet/script01.py | load_img clip_0_1 tensor_to_image gram_matrix imshow vgg_layers index run_styler StyleContentModel style_content_loss train_step make_style_transfer_model res_block load_img clip_0_1 gram_matrix vgg_layers run_styler array float32 convert_image_dtype read_file cast int32 resize max decode_image title squeeze shape cast float32 einsum VGG19 Model add_n assign apply_gradients gradient clip_0_1 layers train_step print name StyleContentModel Variable Adam VGG19 range len Input range res_block make_style_transfer_model | # neural_artistic_style
A playground for experimenting with different methods of neural artistic style transfer. Currently, only the Gatys method (i.e. the original methodology, https://arxiv.org/abs/1508.06576) is supported. There are two implementations. The first is the notebook from the TensorFlow tutorial called Neural Style Transfer (https://www.tensorflow.org/tutorials/generative/style_transfer), which is found in `style_transfer.ipynb`. The other is in the notebook `gatys.ipynb` and is merely a refactoring of this code such that the utility functions are in `utils.py` and the logic specific to the method is found in the `gatys` Python module in this repository.
## Docker image
The simplest way to get started with this repository is to use Docker. Execute the following command to build the docker image:
```bash
./scripts/build_image
```
The docker image is named `neural_artistic_style` and it supports two modes: one that exposes Jupyter notebooks and another that exposes a Flask server. These modes can be activated by executing `./scripts/run_jupyter` and `./scripts/run_flask` respectively. These scripts need to be executed from within the root of this repository because they mount this repository at `/tf` within the container.
*Note:* The resulting containers are not retained once they are shut down.
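For intuition about what the Gatys method optimizes: style is matched through Gram matrices of feature maps (this repo's `gram_matrix` helper does the same in TensorFlow with `einsum`). A plain-numpy sketch of the computation, illustrative rather than the exact TF code:

```python
import numpy as np

def gram_matrix(feats):
    """Channel-by-channel feature correlations for one layer.
    feats: (H, W, C) activation map -> (C, C) Gram matrix, normalized
    by the number of spatial positions."""
    h, w, c = feats.shape
    flat = feats.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(gram_style, gram_generated):
    """Mean squared difference between target and generated Gram matrices."""
    return float(np.mean((gram_style - gram_generated) ** 2))

f = np.ones((2, 3, 4))                       # constant activations
g = gram_matrix(f)
assert g.shape == (4, 4) and np.allclose(g, 1.0)
assert style_loss(g, g) == 0.0
```

The full method sums this style term over several VGG layers and adds a content term, then optimizes the generated image by gradient descent.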
daintlab/confidence-aware-learning | ['active learning', 'out of distribution detection'] | ['Confidence-Aware Learning for Deep Neural Networks'] | train.py data.py metrics.py model/resnet.py test.py model/vgg.py crl_utils.py model/densenet_BC.py utils.py main.py History negative_entropy Custom_Dataset one_hot_encoding get_loader main calc_nll aurc_eaurc calc_fpr_aupr calc_metrics get_metric_values calc_ece coverage_risk calc_nll_brier calc_aurc_eaurc main train accuracy AverageMeter Logger DenseNet3 TransitionBlock BottleneckBlock DenseBlock BasicBlock resnet110 resnet20 PreActBasicBlock ResNet_Cifar Bottleneck conv3x3 PreActBottleneck BasicBlock PreAct_ResNet_Cifar make_layers vgg19 VGG vgg16 log_softmax sum softmax data one_hot_encoding print Compose SVHN labels targets DataLoader CIFAR10 Custom_Dataset CIFAR100 get list print map set array data batch_size SGD MultiStepLR Logger save dataset cuda calc_metrics epochs range state_dict save_path join History makedirs write data_path parameters get_loader train step gpu len calc_fpr_aupr get_metric_values calc_ece calc_nll_brier calc_aurc_eaurc aurc_eaurc list sorted array zip coverage_risk max format print roc_curve argmin average_precision_score array abs max format zip print gt mean eq linspace item tensor le max zeros calc_nll logsoftmax format print LogSoftmax mean item tensor sum range zeros_like len append range len print format log eval load format file_name load_state_dict model get_target_margin zero_grad roll max topk rank_weight max_correctness_update correctness_update update format size softmax item enumerate time backward print AverageMeter clone criterion_cls accuracy criterion_ranking write step negative_entropy len topk size t eq mul_ expand_as append sum max PreAct_ResNet_Cifar PreAct_ResNet_Cifar Conv2d | # Confidence-Aware Learning for Deep Neural Networks This repository provides the code for training with *Correctness Ranking Loss* presented in the paper "[Confidence-Aware Learning for Deep 
Neural Networks](https://arxiv.org/abs/2007.01458)" accepted to ICML 2020.
## Getting Started
### Requirements
```
* ubuntu 18.0.4, cuda10
* python 3.6.8
* pytorch >= 1.2.0
* torchvision >= 0.4.0
```
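For intuition, the Correctness Ranking Loss is a pairwise ranking penalty: a sample that has been classified correctly more often during training should also receive higher confidence. A toy numpy version of that pairwise term (illustrative only; see `crl_utils.py` and `main.py` in this repo for the actual implementation):

```python
import numpy as np

def correctness_ranking_loss(conf, correctness, margin=0.0):
    """Toy pairwise CRL: if sample i has been correct more often than j,
    its confidence should be higher; violations pay a hinge penalty."""
    loss, pairs = 0.0, 0
    n = len(conf)
    for i in range(n):
        for j in range(n):
            if correctness[i] > correctness[j]:      # i should outrank j
                loss += max(0.0, margin - (conf[i] - conf[j]))
                pairs += 1
    return loss / max(pairs, 1)

conf = np.array([0.9, 0.6, 0.3])
hist = np.array([5, 3, 1])   # how often each sample was classified correctly
assert correctness_ranking_loss(conf, hist) == 0.0        # ranking respected
assert correctness_ranking_loss(conf[::-1], hist) > 0.0   # violations penalized
```

In training this term is added to the usual cross-entropy loss, pushing the model's confidence ordering to agree with its historical correctness statistics.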
dair-iitd/PoolingAnalysis | ['text classification'] | ['Why and when should you pool? Analyzing Pooling in Recurrent Architectures'] | train.py my_models/utils.py my_models/models.py my_dataloader.py grad_cam.py my_utils.py params.py test.py my_models/custom_lstm.py | free_stored_grads gradients_compute compute_norm write_gradients process_gradients gates_compute compute_ratios load_pickle get_wiki_test_data get_data dump_pickle get_ood_test_data get_all_logs categorical_accuracy set_all_seeds_to get_model_path epoch_time parse_args add_config wiki_appended_text evaluate_NWI evaluate text_to_vector count_parameters get_str vector_to_text evaluate_wiki_attack myprint get_wiki_vec evaluate count_parameters iter_func train myprint round_down CustomLSTM RNN total_extra_params to_cuda MyRNN swish_module gelu swish gelu_module tensorize parse_args list linspace interp range len str tolist write f_ht b_ht process_gradients append concatenate_lists join tolist write close open mean average load task list isinstance close extend BucketIterator append Dataset values open load task list isinstance close extend BucketIterator append Dataset values open load list isinstance close extend append values task splits dump print debug close shuffle load_pickle task int splits print LabelField Field float build_vocab eq argmax int seed manual_seed_all manual_seed str nesterov batch_size use_bert wiki data_size lr amsgrad open add_argument ArgumentParser load items list __dict__ isinstance config_file extend open print range print eval zeros len range split randint int choice long get_str t get_wiki_vec item to range cat print eval zeros eval batch_size str write log flush log compute_ratios gradients zero_grad squeeze exit write_gradients gradients_compute copy label flush criterion backward evaluate text iter_func write accuracy step free_stored_grads out_dim fc_dim final_dim hidden_dim use_base FloatTensor index append range len is_available | # Analyzing Pooling in Recurrent Architectures
Repository for the paper [Analyzing Pooling in Recurrent Architectures](https://arxiv.org/abs/2005.00159) by [Pratyush Maini](https://pratyush911.github.io), [Kolluru Sai Keshav](https://saikeshav.github.io/), [Danish Pruthi](https://www.cs.cmu.edu/~ddanish/) and [Mausam](http://www.cse.iitd.ac.in/~mausam/)
## Dependencies
The dependencies required to run the code can be installed using the `conda` environment file provided:
```
conda env create --file environment.yaml
```
## Running gradients experiments
### Evaluate the Initial gradient distribution
```
dakot/probal | ['active learning'] | ['Toward Optimal Probabilistic Active Learning Using a Bayesian Approach'] | src/utils/mathematical_functions.py src/query_strategies/probabilistic_active_learning.py src/query_strategies/active_learnig_with_cost_embedding.py src/base/query_strategy.py src/query_strategies/expected_error_reduction.py src/query_strategies/random_sampling.py src/utils/evaluation_functions.py src/utils/data_functions.py src/utils/mixture.py src/classifier/parzen_window_classifier.py src/query_strategies/expected_probabilistic_active_learning.py src/query_strategies/uncertainty_sampling.py src/query_strategies/query_by_committee.py src/query_strategies/optimal_sampling.py src/evaluation/experimental_setup_csv.py src/query_strategies/mdsp.py src/base/data_set.py DataSet QueryStrategy PWC main run ALCE cost_sensitive_uncertainty EER expected_error_reduction expected_error_reduction_pwc xpal_gain XPAL _smacof_single_p MDSP smacof_p compute_errors_pwc compute_errors OS PAL pal_gain QBC calc_avg_KL_divergence RS US uncertainty_scores load_data compute_statistics misclf_rate eval_perfs euler_beta gen_l_vec_list multinomial_coefficient kernels np_ix Mixture PWC XPAL PAL process_time make_query QBC update_entries floor abspath DataFrame log str ALCE cdist len dirname append US train_test_split sum fit_transform eval_perfs range format DataSet size EER sqrt mean unique startswith power float int deepcopy RS join print OS to_csv kernels load_data eye split transform fit parse_args add_argument ArgumentParser run int zeros kneighbors array range predict fit deepcopy len predict_proba vstack append zeros sum max range enumerate fit max np_ix zeros sum array range enumerate len T format arange print argmin check_array eye zeros sum array enumerate T check_random_state euclidean_distances ones reshape print inv rand copy ravel dot IsotonicRegression sum check_symmetric range fit_transform list check_random_state hasattr check_array argmin copy warn zip 
_smacof_single_p randint max range deepcopy len misclf_rate zeros range fit misclf_rate unique zeros argmax range len asarray ones reshape argmin euler_beta multinomial_coefficient vstack swapaxes eye tile zeros sum len check_random_state join list check_X_y compress LabelEncoder dirname abspath fit_transform array read_csv values fit items list predict append perf_func mean array std len pop str append arange range array | # Toward Optimal Probabilistic Active Learning Using a Bayesian Approach
## Project Structure
- data: contains .csv-files of data sets that are not available at [OpenML](https://www.openml.org/home)
- data_set_ids.csv: contains the list of data sets with their IDs at [OpenML](https://www.openml.org/home)
- images: path where the visualizations of the results and utility plots will be saved
- src: Python package consisting of several sub-packages
  - base: implementation of DataSet and QueryStrategy class
  - classifier: implementation of Parzen Window Classifier (PWC)
  - evaluation: scripts for experimental setup and SLURM
  - notebooks: jupyter notebooks for the investigation of the different query strategies
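For reference, the Parzen Window Classifier mentioned under `src/classifier` predicts with kernel-weighted class frequencies; a stripped-down numpy sketch (the Gaussian kernel and hyperparameters here are illustrative, not the repo's exact PWC):

```python
import numpy as np

def pwc_predict_proba(X_train, y_train, X_test, n_classes, bandwidth=1.0):
    """Parzen Window Classifier sketch: class probabilities are the
    kernel-weighted label frequencies of the training samples."""
    probs = np.zeros((len(X_test), n_classes))
    for i, x in enumerate(X_test):
        d2 = ((X_train - x) ** 2).sum(axis=1)
        k = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
        for c in range(n_classes):
            probs[i, c] = k[y_train == c].sum()
        probs[i] /= probs[i].sum()
    return probs

X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, 0, 1, 1])
p = pwc_predict_proba(X, y, np.array([[0.05], [5.05]]), n_classes=2)
assert p[0, 0] > 0.9 and p[1, 1] > 0.9   # each query sides with its nearby class
```

Because the unnormalized kernel frequencies double as Bayesian pseudo-counts, this classifier pairs naturally with the probabilistic query strategies implemented in `src/query_strategies`.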
dalab/web2text | ['information retrieval'] | ['Web2Text: Deep Structured Boilerplate Removal'] | src/main/python/forward.py src/main/python/viterbi.py other_frameworks/bte/bte.py src/main/python/config.py src/main/python/main.py src/main/python/shuffle_queue.py src/main/python/data.py src/main/python/data/convert_scala_csv.py html2text bte html_entities find_paragraphs tokenise preclean usage token_value Config assert_raises _get_variable edge unary conv loss get_batch train_edge classify evaluate_edge test_structured main train_unary evaluate_unary ShuffleQueue softmax viterbi cli join bte find_paragraphs tokenise preclean append sub html_entities sub append token_value range len startswith append lower search fn Config convert_to_tensor len Config convert_to_tensor len REGULARIZATION_LOSSES get_collection sparse_softmax_cross_entropy_with_logits reduce_mean add_n scalar l2_regularizer _get_variable dropout truncated_normal_initializer argv train_edge print exit test_structured classify train_unary float prediction_fn enumerate prediction_fn enumerate ShuffleQueue minimize reshape reuse_variables get_collection float32 unary placeholder int64 Saver global_variables_initializer loss ShuffleQueue minimize reshape reuse_variables edge float32 placeholder get_collection int64 Saver global_variables_initializer loss reuse_variables edge unary float32 placeholder get_collection Saver global_variables_initializer genfromtxt constant astype unary float32 edge get_collection Saver global_variables_initializer zeros range takeOne random_integers exp max log softmax zeros argmax range genfromtxt docs list echo save array | # Web2Text Source code for [Web2Text: Deep Structured Boilerplate Removal](https://arxiv.org/abs/1801.02607), full paper at ECIR '18 ## Introduction This repository contains * Scala code to parse an (X)HTML document into a DOM tree, convert it to a CDOM tree, interpret tree leaves as a sequence of text blocks and extract features for each of these blocks. 
* Python code to train and evaluate unary and pairwise CNNs on top of these features. Inference on the hidden Markov model based on the CNN output potentials can be executed using the provided implementation of the Viterbi algorithm. * The [CleanEval](https://cleaneval.sigwac.org.uk) dataset under `src/main/resources/cleaneval/`: - `orig`: raw pages - `clean`: reference clean pages - `aligned`: clean content aligned with the corresponding raw page on a per-character basis using the alignment algorithm described in our paper | 1,819 |
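For reference, the Viterbi decoding mentioned above (see `src/main/python/viterbi.py`) is the textbook max-sum recursion over the CNN's unary potentials plus a pairwise transition term; a log-domain numpy sketch, not the repo's exact interface:

```python
import numpy as np

def viterbi(unary, pairwise):
    """Most likely label sequence under log-potentials.
    unary: (T, K) per-block scores; pairwise: (K, K) transition scores."""
    T, K = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise       # (K, K): previous -> current
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + unary[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):              # follow backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two labels (0 = boilerplate, 1 = content) with sticky transitions.
unary = np.log(np.array([[0.9, 0.1], [0.6, 0.4], [0.1, 0.9]]))
pairwise = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
assert viterbi(unary, pairwise) == [0, 0, 1]
```

In the paper's setup, the unary and pairwise CNNs supply these potentials per text block, and the decoded path labels each block as boilerplate or main content.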
danielegrattarola/ccm-aae | ['link prediction'] | ['Adversarial Autoencoders with Constant-Curvature Latent Manifolds'] | src/mnist.py src/geometry.py | get_ccm_distribution hyperbolic_clip hyperbolic_uniform is_hyperbolic _ccm_uniform clip hyperbolic_distance get_distance spherical_uniform _ccm_normal belongs log_map ccm_normal ccm_uniform is_spherical spherical_normal exp_map euclidean_distance spherical_clip spherical_distance CCMMembership hyperbolic_inner hyperbolic_normal sample_points mean_pred normal spherical_clip sqrt uniform sum sign append _ccm_uniform normal exp_map normal zeros abs ndarray isinstance _ccm_normal zip append len norm copy arange delete copy sqrt sum dot T eye hyperbolic_inner zeros abs ndarray isinstance zeros abs ndarray isinstance reshape hstack shuffle unique append | This is the official implementation of the paper "Adversarial Autoencoders with Constant-Curvature Latent Manifolds" by D. Grattarola, C. Alippi, and L. Livi. (2018, [https://arxiv.org/abs/1812.04314](https://arxiv.org/abs/1812.04314)).
This code showcases the general structure of the methodology used for the experiments in the paper, and allows you to reproduce the results on MNIST (the other two applications are conceptually similar, but the code was much messier).
Please cite the paper if you use any of this code for your research:
```
@article{grattarola2019adversarial,
title={{Adversarial autoencoders with constant-curvature latent manifolds}},
author={Grattarola, Daniele and Livi, Lorenzo and Alippi, Cesare},
journal={Applied Soft Computing},
volume={81},
pages={105511},
danielenricocahall/Keras-Weighted-Hausdorff-Distance-Loss | ['object localization'] | ['Locating Objects Without Bounding Boxes'] | test/test_loss.py hausdorff/hausdorff.py cdist weighted_hausdorff_distance test_hausdorff_loss_diff test_cdist test_hausdorff_loss_match reshape maximum square reduce_sum matmul sqrt convert_to_tensor sqrt cartesian convert_to_tensor cdist expand_dims astype expand_dims astype | # Weighted Hausdorff Distance Loss # In this repository, you'll find an implementation of the weighted Hausdorff Distance Loss, described here (https://arxiv.org/abs/1806.07564). A majority of the work was just porting their PyTorch implementation (https://github.com/HaipengXiong/weighted-hausdorff-loss). I figured some researchers/practitioners that are doing object detection/localization may find this useful! # Setup `pipenv install .` should configure a python environment and install all necessary dependencies in the environment. # Testing Some tests verifying basic components of the loss function have been incorporated. Run `python -m pytest` in the repo to execute them. ## TODO ## Add an example script. | 1,821 |
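Until the example script from the TODO lands, here is a numpy sketch of the plain *averaged* Hausdorff distance that the weighted loss relaxes; this is illustrative only, and the actual TensorFlow loss lives in `hausdorff/hausdorff.py`:

```python
import numpy as np

def averaged_hausdorff(a, b):
    """Averaged Hausdorff distance between point sets a: (N, 2) and b: (M, 2):
    mean nearest-neighbour distance in both directions (unweighted variant)."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0], [1.0, 1.0]])
assert averaged_hausdorff(a, a) == 0.0               # identical sets
assert averaged_hausdorff(a, a + [0.0, 1.0]) > 0.0   # shifted set costs > 0
```

The weighted version in the paper replaces the hard minima with probability-weighted soft minima so the loss is differentiable with respect to a predicted heat map rather than a fixed point set.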
danielenricocahall/One-Class-NeuralNetwork | ['anomaly detection'] | ['Anomaly Detection using One-Class Neural Networks'] | ocnn.py test/test_basic.py driver.py loss.py | main quantile_loss OneClassNeuralNetwork test_loss_function test_build_model add_subplot show use set_xlabel ylabel title scatter legend predict epoch plot set_zlabel train_model pop T ListedColormap xlabel File OneClassNeuralNetwork set_ylabel figure len convert_to_tensor OneClassNeuralNetwork build_model | # One-Class-NeuralNetwork
Simplified Keras implementation of a one-class neural network for nonlinear anomaly detection. The implementation is based on the approach described here: https://arxiv.org/pdf/1802.06360.pdf. I've included several datasets from ODDS (http://odds.cs.stonybrook.edu/) and the Wine Dataset from UCI (https://archive.ics.uci.edu/ml/datasets/wine) to play with.
# Setup
`pipenv install .` should configure a Python environment and install all necessary dependencies in the environment.
# Running
Running `python kdd_cup.py` or `python wine.py` within your new Python environment (either through CLI or IDE) should kick off training on the corresponding dataset and generate some output plots.
# Testing
Two unit tests are defined in `test/test_basic.py`: building the model, and a quantile loss test based on the example in the paper:

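For intuition, the hinge/quantile term of the one-class objective from the paper (the part `loss.py` implements as `quantile_loss`) can be written in a few lines of numpy; this sketch omits the weight-regularization terms:

```python
import numpy as np

def ocnn_quantile_term(y_hat, r, nu):
    """Hinge/quantile term of the one-class NN objective from the paper:
    (1/nu) * mean(max(0, r - y_hat)) - r, weight penalties omitted."""
    return np.maximum(0.0, r - y_hat).mean() / nu - r

scores = np.array([1.0, 2.0, 3.0])
assert ocnn_quantile_term(scores, r=0.5, nu=0.1) == -0.5   # all scores above r
assert ocnn_quantile_term(scores, r=2.5, nu=0.1) > 0.0     # violations dominate
```

During training the threshold `r` is updated toward the nu-quantile of the network scores, so roughly a nu-fraction of the training points is allowed to fall below it.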
danielgordon10/thor-iqa-cvpr-2018 | ['visual question answering'] | ['IQA: Visual Question Answering in Interactive Environments'] | qa_agents/qa_agent.py generate_questions/generate_existence_questions.py supervised/train_navigation_agent.py depth_estimation_network/models/network.py question_embedding/parse_question.py layouts/make_layout_files.py thor_tests/run_all_tests.py question_embedding/train_question_embedding.py thor_tests/test_image_overlays.py generate_questions/questions.py thor_tests/test_random_initialize.py utils/question_util.py generate_questions/generate_contains_questions.py train.py darknet_object_detection/detector.py thor_tests/speed_test.py networks/question_embedding_network.py supervised/sequence_generator.py graph/batchGraphGRU.py reinforcement_learning/a3c_testing_thread.py networks/free_space_network.py constants.py networks/rl_network.py utils/bb_util.py reinforcement_learning/a3c_test.py graph/graph_obj.py reinforcement_learning/a3c_training_thread.py thor_tests/__init__.py qa_agents/graph_agent.py utils/tf_util.py generate_questions/episode.py generate_questions/generate_counting_questions.py generate_questions.py test_thor.py networks/end_to_end_baseline_network.py tasks.py depth_estimation_network/models/fcrn.py qa_agents/end_to_end_baseline_agent.py utils/action_util.py networks/qa_planner_network.py depth_estimation_network/depth_estimator.py supervised/semantic_map_pretrain.py run_thor_tests.py eval.py human_controlled_test.py utils/game_util.py game_state.py question_to_text.py utils/drawing.py reinforcement_learning/rmsprop_applier.py utils/py_util.py reinforcement_learning/a3c_train.py generate_questions/combine_hdf5.py __init__.py depth_estimation_network/models/__init__.py GameState QuestionGameState PersonGameState main startx pci_records generate_xorg_conf ObjectDetector visualize_detections setup_detectors get_detector DepthEstimator get_depth_estimator ResNet50UpProj layer interleave get_incoming_shape Network combine 
Episode main main main ListQuestion ExistenceQuestion Question CountQuestion BatchGraphGRUCell Graph EndToEndBaselineNetwork FreeSpaceNetwork QAPlannerNetwork QuestionEmbeddingNetwork DeepQNetwork RLNetwork A3CNetwork EndToEndBaselineGraphAgent GraphAgent RLGraphAgent QAAgent tokenize_sentence get_sequences vocabulary_size shuffle_data run main A3CTestingThread run A3CTrainingThread RMSPropApplier run SequenceGenerator run main run_tests test_movement_speed test_random_initialize_speed assert_image test_depth_and_ids_images run_tests test_random_initialize run_tests test_random_initialize_with_remove_prob assert_failed_action test_random_initialize_randomize_open assert_successful_action test_random_initialize_with_repeats ActionUtil make_square xywh_to_xyxy xyxy_to_xywh scale_bbox clip_bbox subplot visualize_detections draw_detection_box drawRect get_question_str pretty_action distance choose_action_q unique_rows get_objects_of_type check_object_size get_rotation_matrix get_object_bounds choose_action set_open_close_object depth_to_world_coordinates imresize get_object_point get_action_str object_size get_pose reset get_object object_center_position create_env depth_imresize encode decode get_time_str get_question_str cond_scope restore l2_regularizer split_axis kernel_to_image variable_summaries remove_axis_get_shape empty_scope conv_variable_summaries save split_axis_get_shape restore_from_dir remove_axis Session update str sorted int get_question_str glob print File write len set split OBJECTS bool max flush open append decode split append join format int list mkstemp call split astype copy draw_detection_box int32 enumerate append ObjectDetector range acquire release Session Tensor isinstance dtype list str remove print cumsum insert File extend shape rename create_dataset append keys exists enumerate Process join create_dump TEST_SCENE_NUMBERS start sleep ceil round range append TRAIN_SCENE_NUMBERS MAX_SENTENCE_LENGTH len split append int tokenize_sentence 
items list permutation arange copy save Session str sorted list restore BATCH_SIZE GPU_ID get_time_str append range get glob TEST_SET FileWriter mean LOG_FILE flush join items time train_op graph print extend QuestionEmbeddingNetwork create_network finalize shuffle_data add_summary global_variables_initializer create_train_ops OBJECT_DETECTION USED_QUESTION_TYPES setup_detectors Lock list get_time_str Thread TEST_SET close shuffle LOG_FILE zip extend finalize PARALLEL_SIZE makedirs OBJECT_DETECTION TRAINABLE_VARIABLES conv_variable_summaries Saver restore_from_dir USED_QUESTION_TYPES setup_detectors MAX_TIME_STEP Lock merge_all get_collection placeholder USE_NAVIGATION_AGENT RMSPropApplier Thread close PREDICT_DEPTH start load_weights mkdir RUN_TEST merge sync A3CTestingThread File makedirs float32 CHECKPOINT_DIR A3CTrainingThread reset create_dataset PARALLEL_SIZE len randint where flatten argmax max add_run_metadata subplot STEPS_AHEAD ones tolist set_trace waitKey imshow sum concatenate ascontiguousarray choice copy SCREEN_HEIGHT flipud zeros enumerate RunOptions int DRAWING SCREEN_WIDTH sort reshape maximum min RunMetadata release add acquire sleep GRU_SIZE SequenceGenerator set remove NUM_CLASSES NUM_UNROLLS split create_env run_tests time print reset random_initialize range time print reset random_initialize step range create_env test_movement_speed test_random_initialize_speed compare_images_for_scene print test_depth_and_ids_images step step dict reset assert_failed_action random_initialize step assert_successful_action dict reset assert_failed_action random_initialize step assert_successful_action dict reset random_initialize step assert_successful_action dict reset random_initialize step assert_successful_action test_random_initialize test_random_initialize_with_remove_prob test_random_initialize_randomize_open test_random_initialize_with_repeats clip isinstance astype float32 shape zeros clip_bbox isinstance astype float32 shape zeros clip_bbox Number 
isinstance astype copy array tile full clip_bbox isinstance astype maximum float32 zeros floor resize max fromarray pad ceil COLORMAP_JET range truetype astype full int uint8 Draw applyColorMap putText text float32 array int squeeze round array int list tuple putText getTextSize map rectangle hexdigest print start Controller dict step abs pose deepcopy dtype view unique copy exp EVAL astype float32 resize astype float32 resize cos matmul pi sin matrix T AGENT_STEP_SIZE SCREEN_WIDTH arange reshape inv SCREEN_HEIGHT stack FOCAL_LENGTH get_rotation_matrix maximum clip astype int32 clip array int32 astype SCREEN_WIDTH SCREEN_HEIGHT array clip int T argmin square sqrt sum array localtime seed randint as_list int ndims tuple squeeze transpose reshape reduce_max sqrt pad ceil reduce_min range variable_summaries minimum join sorted NewCheckpointReader list as_list int print set shape assign Saver append keys get_variable_to_shape_map run get_checkpoint_state model_checkpoint_path restore print print join makedirs pop insert | # THOR-IQA-CVPR-2018  This repository contains the code for training and evaluating the various models presented in the paper [IQA: Visual Question Answering in Interactive Environments](https://arxiv.org/pdf/1712.03316.pdf). It also provides an interface for reading the questions and generating new questions if desired. ## If all you want is IQUAD-V1: IQUAD, the Interactive Question Answering Dataset, is included in this repository. If you do not want the rest of the code, then you may find what is included in the [questions](questions) folder sufficient as long as you also have a working version of THOR. You should be able to set up THOR using the simple pip install command, but you may want to install the exact version specified in the [requirements.txt](reqirements.txt). [questions](questions) contains three sub-folders: | 1,823 |
danielhers/synsem | ['semantic parsing'] | ['Content Differences in Syntactic and Semantic Representation', 'Content Differences in Syntactic and Semantic Representations'] | lost_participants.py compare_yields.py nested_conj.py extract_raw_counts.py secondary_verbs.py eval_conj.py extract_eval.py calc_len.py lost_scenes.py eval_depcl.py AverageLengthCalculator Evaluator ConjunctEvaluator DependentClauseEvaluator eval_ref Data Report eval_model_features strip combine main eval_corpus split eval_ref eval_model_features main get_columns eval_corpus Scene LostParticipantEvaluator LostSceneEvaluator NestedConjunctEvaluator SecondaryVerbEvaluator eval_ref groupby sorted list attrgetter print map set groupby sorted attrgetter eval_model_features print astype map dict zip list replace print tolist combine list getattr combine_first replace eval_corpus groupby sorted attrgetter ref name filter get_columns print sorted set join product Series astype map get_columns | # Content Differences in Syntactic and Semantic Representations This is code for the experiments and analysis in the [paper](https://www.aclweb.org/anthology/N19-1047): ``` @inproceedings{hershcovich2019content, title = "Content Differences in Syntactic and Semantic Representation", author = "Hershcovich, Daniel and Abend, Omri and Rappoport, Ari", booktitle = "Proc. of NAACL-HLT", url = "https://www.aclweb.org/anthology/N19-1047", | 1,824 |
danielhers/tupa | ['semantic parsing'] | ['A Transition-Based Directed Acyclic Graph Parser for UCCA', 'Multitask Parsing Across Semantic Representations'] | tupa/parse.py tupa/classifiers/nn/constants.py tupa/features/dense_features.py tupa/states/state.py tests/test_model.py tupa/states/edge.py tupa/features/empty_features.py tests/test_parser.py tupa/traceutil.py tests/test_oracle.py tupa/config.py docs/conf.py tupa/model_util.py tupa/oracle.py tupa/scripts/load_save.py tupa/classifiers/nn/util.py tupa/classifiers/nn/neural_network.py tupa/model.py tupa/features/sparse_features.py tupa/__main__.py tupa/scripts/viz.py server/parse_server.py tupa/states/node.py tupa/classifiers/linear/sparse_perceptron.py tupa/scripts/dump_vocab.py tupa/scripts/export.py tupa/classifiers/nn/sub_model.py tupa/scripts/tune.py tupa/features/feature_params.py tupa/scripts/visualize_learning_curve.py tupa/scripts/conll18_ud_eval.py tests/conftest.py tupa/classifiers/noop.py tupa/features/feature_extractor.py tupa/scripts/strip_multitask.py tupa/action.py tupa/scripts/enum_to_json.py tests/test_config.py tests/test_features.py setup.py tupa/classifiers/classifier.py tupa/__version__.py tupa/labels.py tupa/classifiers/nn/birnn.py tupa/classifiers/nn/mlp.py install Mock parse download parser_demo get_parser visualize config pytest_addoption empty_features_config passage_files basename write_oracle_actions Settings default_setting weight_decay pytest_generate_tests write_features create_config load_passage remove_existing test_passage assert_all_params_equal test_boolean_params config test_params test_hyperparams _test_features test_features_conllu test_feature_templates FeatureExtractorCreator test_features feature_extractors extract_features test_model parse test_oracle gen_actions test_train_empty test_ensemble test_extra_classifiers test_parser test_copy_shared test_empty_features test_iterations Action Actions Config FallbackNamespace Iterations Hyperparams add_param_arguments 
HyperparamsInitializer Labels ParameterDefinition Model ClassifierProperty DropoutDict DefaultOrderedDict remove_backup load_enum KeyBasedDefaultDict AutoIncrementDict jsonify Strings load_dict UnknownDict save_dict IdentityVocab remove_existing save_json Lexeme Vocab load_json is_terminal_edge is_implicit_node is_remote_edge Oracle print_scores generate_and_len to_lower_case read_passages single_to_iter Parser PassageParser filter_passages_for_bert get_eval_type from_text_format get_output_converter AbstractParser train_test ParserException ParseMode main BatchParser percents_str main_generator average_f1 tracefunc set_tracefunc set_traceback_listener print_traceback dict_value Classifier NoOp FeatureWeights FeatureWeightsCreator SparsePerceptron HighwayRNN HierarchicalBiRNN BiRNN EmptyRNN CategoricalParameter MultilayerPerceptron NeuralNetwork AxisModel SubModel randomize_orthonormal DenseFeatureExtractor EmptyFeatureExtractor dep_distance static_vars height FeatureTemplateElement FeatureTemplate gap_type get_punctuation head_terminal gap_lengths calc prop_getter FeatureExtractor head_terminal_height has_gaps gap_length_sum FeatureParameters NumericFeatureParameters SparseFeatureExtractor UDError _decode evaluate load_conllu load_conllu_file TestAlignment main _encode main main decode main save_model load_model main main delete_if_exists strip_multitask sample main get_values_based_on_format Params main load_scores visualize main smooth visualize Edge Node State InvalidActionError print Parser decode print to_standard next from_text decode BytesIO print FigureCanvasAgg fromstring from_standard draw close get_data print_png figure print addoption getoption parametrize Config update update_hyperparams update create_config read_files_and_dirs items sorted assert_allclose glob remove Config update update_hyperparams update items list update items list update_hyperparams items list params append need_label label_node values set_format list feature_extractor_creator 
get_label transition load_passage extract_features finished annotate State items join min init_param Actions Oracle _test_features _test_features feature_extractor_creator join set_format str argmax init_features update init_model finished_step State extract_features set_format finished_item update join load parse sorted items dict Model finalize remove_existing save range assert_all_params_equal all_params load_passage update dict set_format str finished label_node min need_label get_label transition Actions State Oracle values model tuple Parser weight_decay all_params list map append update parse passage_files init zip assert_allclose print dict average_f1 remove_existing train assert_all_params_equal update list passage_files print map dict Parser remove_existing train update list passage_files map update_hyperparams dict Parser remove_existing train update list parse passage_files map dict remove_existing train enumerate update list parse passage_files map dict Parser remove_existing train update list passage_files isinstance map dict remove_existing Iterations append train update dict remove_existing Action update add_argument_group get_group_arg_names add_mutually_exclusive_group add add_boolean ArgParser print copy2 glob remove setrecursionlimit time remove_existing print print time try_load print remove_existing print print_scores parse print Scores Parser testscores get lower all from_text from_pretrained join get str bert_model print ID tokenize len join list passages print train_test Scores shuffle read_passages append folds range args print main_generator list set_traceback_listener print setprofile dump_traceback signal randn shape set_value svd child get_terminals any has_gaps list map sorted prop_getter rstrip UDWord list _decode len map add UDSpan append UDRepresentation range readline process_word words set startswith int join extend split align_words words open system_files format evaluate system_total add_argument load_conllu_file gold_file 
precision correct gold_total ArgumentParser f1 parse_args recall enumerate counts model_name isinstance data lang get_vocab suffix load_model models params save_json filename ArgParser values load Model print savez_compressed save_model save difference delete_if_exists join basename keep out_dir strip_multitask makedirs get int seed map run max print list plot xlabel ylabel title clf savefig legend range len load_scores splitext visualize reshape subplots smooth values all_params set_major_locator ndarray colorbar pcolormesh OrderedDict delaxes append MaxNLocator tight_layout enumerate items isinstance tqdm sub sca | Transition-based UCCA Parser ============================ TUPA is a transition-based parser for [Universal Conceptual Cognitive Annotation (UCCA)][1]. ### Requirements * Python 3.6+ ### Install Install the latest release: pip install tupa[bert] Alternatively, install the latest code from GitHub (may be unstable): pip install git+git://github.com/danielhers/tupa.git#egg=tupa | 1,825 |
danieljf24/dual_encoding | ['video retrieval'] | ['Dual Encoding for Zero-Example Video Retrieval'] | util/txt2bin.py basic/util.py loss.py basic/generic_utils.py tv-avs-eval/trec_eval.py evaluation.py util/format_check.py util/get_frameInfo.py trainer.py basic/bigfile.py tester.py basic/constant.py basic/metric.py util/vocab.py util/combine_features.py basic/common.py model.py util/text2vec.py tv-avs-eval/txt2xml.py predictor.py util/data_provider.py i2t encode_data eval_varied i2t_varied i2t_map l2norm t2i_varied encode_text_or_vid t2i_map t2i_inv_rank t2i cal_error i2t_inv_rank_multi i2t_inv_rank cosine_sim TripletLoss euclidean_sim order_sim get_model get_we_parameter Video_multilevel_encoding MFC l2norm xavier_init_fc Text_multilevel_encoding BaseModel Dual_Encoding main parse_args encode_data main parse_args load_config validate get_learning_rate accuracy save_checkpoint main train decay_learning_rate validate_split parse_args BigFile StreamFile printError checkToSkip printMessage makedirsforfile printStatus custom_object_scope func_load has_arg CustomObjectScope func_dump serialize_keras_object get_custom_objects Progbar deserialize_keras_object PrecisionScorer MetricScorer getScorer NDCGScorer APScorer RRScorer DCGScorer AverageMeter read_dict LogCollector Progbar getVideoId write_dict main xml_to_treceval process parse_result wrap_topic_result main process read_topics main process collate_text collate_frame_gru_fn TxtDataSet4DualEncoding get_data_loaders VisDataSet4DualEncoding get_txt_data_loader get_test_data_loaders get_train_data_loaders get_vis_data_loader Dataset4DualEncoding collate_frame read_video_ids main process main process read_dict write_dict Text2Vec AveWord2Vec get_text_encoder Bow2Vec main process checkToSkip Vocabulary from_txt clean_str main from_flickr_json build_vocab norm cdist T l2norm dot update time format logging AverageMeter copy forward_emb LogCollector zeros dataset val_start enumerate len copy add Progbar encoder dataset zeros 
enumerate len median argsort mean floor zeros range len median argsort mean floor zeros range len getScorer score append range len getScorer score append range len argsort zeros sum range len argsort zeros sum range len list append zeros i2t_inv_rank range median mean floor zeros range len argsort zeros range shape argsort zeros range shape size expand size expand BigFile print read_one append ndims range len sqrt div fill_ in_features sqrt out_features uniform_ add_argument ArgumentParser add Progbar encoder workers embed_txt batch_size BigFile trainCollection embed_vis overwrite get_vis_data_loader val_start open rootpath str exit load_state_dict logger_name parse_args visual_feature vocab checkpoint_name format replace cv_name readlines read_dict checkToSkip info vars setattr load join testCollection valCollection time encode_data T print write dumps get_txt_data_loader system dot argsort split makedirsforfile len read exec compile t2i_varied get_test_data_loaders save round list i2t_map t2i_map range i2t log_step encode_text_or_vid startswith n_caption read_video_ids i2t_varied measure t2i cal_error lr_decay_rate validate visual_kernel_num postfix get_train_data_loaders save_checkpoint validate_split max basicConfig map decay_learning_rate visual_feat_dim configure get_we_parameter visual_rnn_size get_data_loaders close text_kernel_num resume num_epochs visual_kernel_sizes text_kernel_sizes flush optimizer text_rnn_size isfile bow_vocab_size train ndims update val time train_start AverageMeter train_emb add LogCollector log_value tb_log dataset Progbar enumerate len i2t encode_data format i2t_varied print log_step i2t_map t2i_varied set add t2i_map log_value t2i startswith info append cal_error i2t embed_txt format i2t_varied print i2t_map t2i_varied embed_vis encode_text_or_vid t2i_map log_value t2i startswith cal_error val_start copyfile remove save param_groups append param_groups topk size t eq mul_ expand_as append sum max makedirs print exists print 
printMessage printMessage hasattr string_types get hasattr isinstance has_arg from_config decode __defaults__ tuple __code__ dumps __closure__ decode isinstance tuple loads encode globals get getfullargspec list getargspec signature values int split eval read close open str close write open print split join parse write close overwrite getchildren open getroot info append iter exists enumerate xml_to_treceval join rootpath read edition print parse_result dirname collection add_option OptionParser print_help list readlines strip map append split append enumerate pid pclass strip overwrite exists topk list trtype read_topics map etime append desc range debug readlines set priority info float makedirs write split len exit list sort mean long zip zeros max enumerate len list mean zip zeros max enumerate len max list sort zip zeros long enumerate len DataLoader VisDataSet4DualEncoding DataLoader TxtDataSet4DualEncoding feature int sorted names items BigFile checkToSkip makedirsforfile write_dict add isnan tofile close array open int enumerate sub update join Vocabulary add_word from_txt Counter add lower clean_str Progbar enumerate len threshold sort text_style dirname collection build_vocab | # Dual Encoding for Zero-Example Video Retrieval Source code of our CVPR'19 paper [Dual Encoding for Zero-Example Video Retrieval](https://arxiv.org/abs/1809.06181). **Note an improved video-text retrieval model is available [here](https://github.com/danieljf24/hybrid_space).**  ## Requirements #### Environments * **Ubuntu** 16.04 * **CUDA** 9.0 * **Python** 2.7 (For python 3, please checkout `python3` branch) * **PyTorch** 0.3.1 | 1,826 |
danieljf24/text2image | ['image retrieval'] | ['Cross-Media Similarity Evaluation for Web Image Retrieval in the Wild'] | util/queryParser.py simpleknn/simpleknn.py util/tools.py simpleknn/testbigfile.py simpleknn/demo.py simpleknn/im2fea.py visual_detector.py util/irc_query.py simpleknn/test_all.py basic/annotationtable.py basic/metric.py simpleknn/norm_feat.py main.py basic/common.py simpleknn/test_merge_feat.py simpleknn/merge_feat.py text2image.py util/irc_image.py simpleknn/txt2bin.py simpleknn/bigfile.py main process Text2Image gene_imagenet_synset readImageNetSynset VisualDetector readConcepts readAnnotations readQueriesFrom niceName readAnnotationsFrom writeAnnotations writeAnnotationsTo writeConcepts writeConceptsTo readQueries writeRankingResults niceNumber printError total_seconds checkToSkip printMessage readRankingResults makedirsforfile printStatus CmdOptions PrecisionScorer MetricScorer getScorer NDCGScorer APScorer RRScorer DCGScorer BigFile StreamFile main process main process main process fillprototype load_model search_model search_result genFields toPyModel main process checkToSkip ImageSimer calImageSimiByL2 calImageSimiByCos OverlapSimer QuerySimer getQuerySimer claSimQuery2WeightedQ merge_single_chars QueryParser SimpleQueryParser replace writeDCGResult readImageClickture readNnData readQidQuery readQueryClickture writeRankingResult writeDCGResult score readAnnotationsFrom overwrite ntopimg SimpleQueryParser values open rootpath list sorted len metric exit OrderedDict nnqueryfile sum mincc getScorer qrythres ntopqry doSearch checkToSkip printMessage zip float queryclickfile readQidQuery join feature print dict writeRankingResult Text2Image split parse_args add_option OptionParser print_help replace strip write close split open join list gene_imagenet_synset strip len map set open append split niceName join join join makedirs close write open join writeConcepts join makedirs close write open join writeAnnotations join makedirs append readlines 
join makedirs close write open print exists print printMessage printMessage int split BigFile strip map printStatus makedirsforfile tofile readlines close blocksize read min array add fromfile append set enumerate rstrip nbytes ssr copyfile sqrt range genFields contents load_ids print set_distance toPyModel makedirs isnan write int mat norm mat T norm get split sub list len split zip open list split zip open enumerate len list len split zip open join makedirs close write open join list makedirs len write close sum values open | # text2image The package provides a python implementation of a new text2image baseline for image retrieval and query visualness computation proposed in [2]. ## Requirements ### Required Packages * **python** 2.7 * **NLTK** for query preprocessing Run the following script to install the NLTK. ```shell sudo pip install -U nltk ``` | 1,827 |
danielmatte/3D-ResNets-PyTorch- | ['action recognition'] | ['Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?'] | utils/eval_ucf101.py utils/video_jpg.py utils/n_frames_ckplus.py opts.py models/resnext.py train.py datasets/hmdb51.py dataset.py models/wide_resnet.py models/densenet.py utils/ucf101_json.py utils.py utils/eval_kinetics.py datasets/activitynet.py utils/ckplus_json.py models/pre_act_resnet.py temporal_transforms.py test.py utils/kinetics_json.py datasets/ucf101.py utils/eval_hmdb51.py utils/hmdb51_json.py mean.py datasets/ckplus.py utils/n_frames_ucf101_hmdb51.py datasets/kinetics.py main.py target_transforms.py model.py utils/n_frames_kinetics.py utils/video_eval_ucf101.py utils/video_jpg_ucf101_hmdb51.py utils/fps.py validation.py spatial_transforms.py models/resnet.py utils/video_jpg_kinetics.py get_training_set get_test_set get_validation_set get_std get_mean generate_model parse_opts MultiScaleCornerCrop CenterCrop MultiScaleRandomCrop ToTensor Compose Scale Normalize RandomHorizontalFlip CornerCrop ClassLabel VideoID Compose TemporalBeginCrop LoopPadding TemporalCenterCrop TemporalRandomCrop calculate_video_results test train_epoch calculate_accuracy AverageMeter Logger load_value_file val_epoch modify_frame_indices get_class_labels load_annotation_data video_loader get_end_t make_dataset ActivityNet accimage_loader get_default_image_loader get_default_video_loader make_untrimmed_dataset pil_loader get_video_names_and_annotations get_class_labels load_annotation_data video_loader CKPLUS make_dataset accimage_loader get_default_image_loader get_default_video_loader pil_loader get_video_names_and_annotations get_class_labels load_annotation_data video_loader make_dataset accimage_loader HMDB51 get_default_image_loader get_default_video_loader pil_loader get_video_names_and_annotations get_class_labels load_annotation_data video_loader make_dataset accimage_loader Kinetics get_default_image_loader get_default_video_loader 
pil_loader get_video_names_and_annotations UCF101 get_class_labels load_annotation_data video_loader make_dataset accimage_loader get_default_image_loader get_default_video_loader pil_loader get_video_names_and_annotations get_fine_tuning_parameters DenseNet densenet201 densenet169 densenet264 _DenseLayer _DenseBlock _Transition densenet121 conv3x3x3 get_fine_tuning_parameters resnet50 downsample_basic_block resnet152 PreActivationBasicBlock resnet34 resnet200 PreActivationBottleneck resnet18 PreActivationResNet resnet101 conv3x3x3 get_fine_tuning_parameters ResNet downsample_basic_block resnet50 Bottleneck resnet152 resnet34 resnet200 resnet18 resnet10 BasicBlock resnet101 ResNeXtBottleneck conv3x3x3 get_fine_tuning_parameters resnet50 downsample_basic_block ResNeXt resnet152 resnet101 conv3x3x3 get_fine_tuning_parameters WideBottleneck resnet50 downsample_basic_block WideResNet getDataset getEmotionFromFile convert_ckplus_to_activitynet_json getVideoPath getAUs HMDBclassification compute_video_hit_at_k get_blocked_videos KINETICSclassification compute_video_hit_at_k UCFclassification compute_video_hit_at_k convert_hmdb51_csv_to_activitynet_json get_labels convert_csv_to_dict load_labels convert_kinetics_csv_to_activitynet_json convert_csv_to_dict subject_process class_process class_process load_labels convert_ucf101_csv_to_activitynet_json convert_csv_to_dict class_process class_process video_path UCF101 CKPLUS ActivityNet Kinetics HMDB51 annotation_path video_path UCF101 n_val_samples CKPLUS ActivityNet Kinetics HMDB51 annotation_path video_path UCF101 CKPLUS ActivityNet Kinetics HMDB51 annotation_path get_fine_tuning_parameters in_features densenet264 DataParallel ft_begin_index resnet34 resnet152 cuda load_state_dict resnet200 resnet101 resnet18 format resnet50 resnet10 n_finetune_classes Linear load densenet169 densenet201 print pretrain_path densenet121 parse_args set_defaults add_argument ArgumentParser topk size mean stack append range update time format 
model print Variable cpu AverageMeter size eval softmax calculate_video_results append range enumerate len data model zero_grad save cuda log update format size enumerate join time result_path criterion backward print Variable calculate_accuracy AverageMeter train step len data topk view size t eq update data time format criterion model print Variable calculate_accuracy AverageMeter size eval cuda log enumerate len join format image_loader append exists get_default_image_loader append enumerate append items list format append join format items list format join get_class_labels deepcopy load_annotation_data print modify_frame_indices len load_value_file ceil max range append get_video_names_and_annotations sort listdir items list format join get_class_labels deepcopy load_annotation_data print modify_frame_indices len load_value_file get_end_t ceil max range append get_video_names_and_annotations print int min DenseNet DenseNet DenseNet DenseNet append format range named_parameters data isinstance FloatTensor Variable zero_ avg_pool3d cuda cat PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet PreActivationResNet ResNet ResNet ResNet ResNet ResNet ResNet ResNet ResNeXt ResNeXt ResNeXt WideResNet glob format join isdir format glob getEmotionFromFile getVideoPath getAUs append getDataset reset_index size tolist mean unique zeros values enumerate Request urlopen format ceil join read_csv append listdir range len append join listdir update get_labels convert_csv_to_dict read_csv update load_labels convert_csv_to_dict join int print append listdir len join int print sort append listdir split append range update load_labels convert_csv_to_dict format call mkdir splitext exists | # 3D ResNets for Action Recognition ## Update (2018/2/21) Our paper "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?" is accepted to CVPR2018! We update the paper information. 
## Update (2018/01/16)
We uploaded some of the fine-tuned models on UCF-101 and HMDB-51.
* ResNeXt-101 fine-tuned on UCF-101 (split1)
* ResNeXt-101 (64 frame inputs) fine-tuned on UCF-101 (split1)
* ResNeXt-101 fine-tuned on HMDB-51 (split1)
* ResNeXt-101 (64 frame inputs) fine-tuned on HMDB-51 (split1) | 1,828
danielt17/Deep-Neural-Trees-Simulation-Environment | ['active learning'] | ['Stealing Black-Box Functionality Using The Deep Neural Tree Architecture'] | BlackBoxDigitalChipSimulationEnvironment.py get_data blackbox2 exp zeros_like sin zeros range blackbox2 | # Deep-Neural-Trees-Black-Box-Simulation-Environment
The following simulation environment simulates the digital chip found in the paper: Stealing Black-Box Functionality Using The Deep Neural Tree Architecture.
Digital chip description:

Requirements: Python 3.6.9, Numpy 1.16.4 | 1,829
danielzuegner/netgan | ['link prediction', 'graph generation'] | ['NetGAN: Generating Graphs via Random Walks'] | setup.py netgan/utils.py netgan/netgan.py gumbel_softmax_sample make_noise NetGAN sample_gumbel gumbel_softmax RandomWalker edges_to_sparse statistics_degrees statistics_LCC statistics_square_count train_val_test_split_adjacency statistics_triangle_count graph_from_scores symmetric largest_connected_components statistics_claw_count edge_overlap squares load_npz statistics_compute_cpl statistics_edge_distribution_entropy random_walk statistics_wedge_count statistics_gini compute_graph_statistics statistics_power_law_alpha statistics_cluster_props score_matrix_from_random_walks print format random_normal random_uniform random_uniform shape sample_gumbel dtype reduce_max gumbel_softmax_sample cast stop_gradient equal bincount connected_components print format ones permutation arange eliminate_zeros edges_to_sparse warn flatten row_stack maximal_matching nnz column_stack seed list tocsr DiGraph map append set nonzero int A1 symmetrize minimum_spanning_tree T maximum difference any randint array len list reshape transpose coo_matrix row_stack zip array int cumsum rand set choice append sum array range len triu_indices_from copy choice shape round zeros triu sum symmetric T cliques vcount sum unique sum sum list triangles from_numpy_matrix sum values as_undirected sum sort sum array float sum square log mean get_blocks csr_matrix shortest_path to_undirected statistics_LCC statistics_degrees statistics_wedge_count statistics_square_count copy degree_assortativity_coefficient statistics_gini statistics_claw_count statistics_triangle_count statistics_compute_cpl statistics_edge_distribution_entropy statistics_power_law_alpha statistics_cluster_props | # NetGAN: Generating Graphs via Random Walks <p align="center"> <img src="https://www.in.tum.de/fileadmin//w00bws/daml/netgan/netgan.png" width="400"> </p> Implementation of the method proposed in the paper: 
**[NetGAN: Generating Graphs via Random Walks](https://arxiv.org/abs/1803.00816)** by Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann Published at ICML 2018 in Stockholm, Sweden. Copyright (C) 2018 Daniel Zügner | 1,830 |
danikiyasseh/SoCal | ['active learning', 'time series'] | ['SoQal: Selective Oracle Questioning for Consistency Based Active Learning of Cardiac Signals'] | run_experiment.py prepare_dataloaders.py run_experiments.py prepare_models.py prepare_network.py prepare_dataset.py perform_training.py prepare_miscellaneous.py prepare_acquisition_functions.py meta_single one_epoch retrieve_time_metric change_ground_truth_label condition_for_oracle obtain_aq_threshold obtain_entropy_threshold retrieve_acquisition_metric retrieve_gaussian_intersection obtain_output_probs perform_MC_sampling update_acquisition_dict select_sample_indices acquisition_function retrieve_entropy obtain_prediction retrieve_variance_ratio load_dataloaders_list_active load_inputs_and_outputs check_dataset_allignment load_initial_data my_dataset_direct determine_classification_setting obtain_predictions change_lr change_weight_decay obtain_loss_function load_initial_model check_mismatch_and_load_weights perturb_weights load_models_list cnn_network_image cnn_network_time train_model save_statistics make_dir make_saving_directory save_config_weights print_hyperparam_info run_configurations zero_grad numpy dataset roc_auc_score criterion_single append to double fit_transform range cat LabelBinarizer concatenate mean float type long isinstance backward tqdm step len meta_single entropy expit mean softmax sum expit where softmax item argmax len float64 log values retrieve_variance_ratio expit list transpose trace append expand_dims concatenate mean softmax zip det items entropy print inv dict dot cov array retrieve_entropy items list print trapz dict list retrieve_acquisition_metric set dict zip append keys fromiter list print min mean linspace append max enumerate values diff list obtain_aq_threshold print sample keys len expit softmax argmax item int entropy seed int list arange set uniform log ppf isinstance reshape retrieve_gaussian_intersection sqrt cdf array score_samples save obtain_prediction seed 
list exp expit retrieve_acquisition_metric Counter uniform select_sample_indices append retrieve_time_metric condition_for_oracle obtain_entropy_threshold obtain_output_probs mean softmax unique zip int items change_ground_truth_label entropy join print dict array len seed list zip concatenate print dict manual_seed append keys range one_epoch join str print check_dataset_allignment len print check_dataset_allignment print exit range len print param_groups print param_groups list arange zip label_array MSELoss dict histogram device tensor BCEWithLogitsLoss max CrossEntropyLoss len tensor max where print join list zip print ones load_inputs_and_outputs Adam dict device append values len print str copy_ items list zip items norm list copy_ shape Normal sample to save_statistics label_array perform_MC_sampling change_weight_decay where update_acquisition_dict save load_dataloaders_list_active argmax log expit exp load_models_list obtain_loss_function acquisition_function append expand_dims one_epoch concatenate load_inputs_and_outputs change_lr sqrt softmax zip float load_initial_data join add_scalar load_initial_model print reshape fit dict save_config_weights GaussianMixture len str join make_dir int int join chdir print join save join save list print dict zip tabulate train_model join list held_out_lr modalities leads arange determine_classification_setting print items batchsize range datasets make_saving_directory zip listdir print_hyperparam_info enumerate | # Active Learning of Cardiac Signals with SoQal SoQal is a framework that allows a network to dynamically decide, upon acquiring an unlabelled data point, whether to request a label for that data point from an oracle or to pseudo-label it instead. It can reduce a network's dependence on an oracle (e.g., physician) while maintaining its strong predictive performance. This repository contains a PyTorch implementation of SoQal.
For details, see **SoQal: Selective Oracle Questioning for Consistency Based Active Learning of Cardiac Signals**. [[ICML paper](https://arxiv.org/pdf/2004.09557.pdf)] [[blogpost](https://danikiyasseh.github.io/blogs/SoQal/)] [[video](https://icml.cc/virtual/2022/spotlight/15970)] # Requirements The SoQal code requires * Python 3.6 or higher * PyTorch 1.0 or higher # Datasets ## Download | 1,831 |
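The decide-per-sample idea described above — pseudo-label an unlabelled point when the network is confident enough, otherwise request a label from the oracle — can be sketched with a plain confidence threshold. This is a hedged simplification: SoQal's actual criterion is consistency-based, and the function name and fixed threshold below are illustrative, not from this repository:

```python
import numpy as np

def selective_label(probs, threshold=0.9):
    """Per unlabelled sample, pseudo-label when the model is confident,
    otherwise fall back to querying the oracle.

    probs: (n_samples, n_classes) array of softmax outputs.
    Returns a list of ("pseudo", predicted_class) or ("oracle", None).
    """
    decisions = []
    for p in probs:
        if p.max() >= threshold:                  # confident -> pseudo-label
            decisions.append(("pseudo", int(p.argmax())))
        else:                                     # uncertain -> ask the oracle
            decisions.append(("oracle", None))
    return decisions

probs = np.array([[0.97, 0.03],   # confident  -> pseudo-label class 0
                  [0.55, 0.45]])  # uncertain  -> request a label
print(selective_label(probs))     # -> [('pseudo', 0), ('oracle', None)]
```

In practice the decision rule (or its threshold) would be tuned on held-out data rather than fixed, which is what lets the method trade oracle queries against label quality.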
danishpruthi/Adversarial-Misspellings | ['sentiment analysis'] | ['Combating Adversarial Misspellings with Robust Word Recognition'] | defenses/scRNN/utils.py spell_checkers/utils.py biLstm.py defenses/scRNN/corrector.py defenses/scRNN/model.py biLstm_char_only.py data/classes/split.py log/__init__.py spell_checkers/ATD.py spell_checkers/atd_checker.py biLstm_with_chars.py spell_checkers/hunspell_checker.py defenses/scRNN/train.py main.py attacks.py all_one_attack add_one_attack get_keyboard_neighbors drop_one_attack_deprecated swap_one_attack get_random_attack random_all_one_attack drop_one_attack is_valid_attack key_one_attack BiLSTM BiLSTM BiLSTM read_dataset start_adversarial_training swap_a_char drop_a_char check_against_spell_mistakes get_word_and_char_indices get_qualitative_examples generate_ann decode_tag add_a_char normalize predict start_training key_a_char main read_valid_lines evaluate create_vocabulary add_more_examples get_confidence read_valid_lines ScRNNChecker ElmoRNN ElmoScRNN ScRNN main decode_line compute_WER iterate get_add_word_representation create_vocab get_swap_word_representation zero_vector convert_vocab_dicts get_boc_word_representation convert_vocab_dicts_bg get_keyboard_word_representation get_target_representation get_batched_input_data pad_target_sequence get_line_representation set_word_limit one_hot _get_random_char get_drop_word_representation get_lines _get_keyboard_neighbor load_vocab_dicts pad_input_sequence bag_of_chars pr_megenta pr pr_underline pr_white warn bcolors pr_cyan pr_red pr_bblue log pr_conceal pr_yellow pr_blue pr_header pr_black info pr_bwhite pr_bblack pr_byellow pr_blink pr_bred pr_bcyan error pr_bgreen pr_green pr_bmagenta Error checkDocument Metric setDefaultKey stats read_birkbeck ATDChecker test_birkbeck test_basic HunspellChecker read_birkbeck range len enumerate split enumerate split enumerate split get_keyboard_neighbors split append range enumerate len drop_one_attack swap_one_attack key_one_attack 
add_one_attack generator shuffle append defaultdict range len lower get_keyboard_neighbors is_valid_attack sum array range len print read_valid_lines zip key_a_char swap_a_char perturbation_fn drop_a_char add_a_char append exp max read_valid_lines split add append all_one_attack pr_red get_word_and_char_indices open str list add_one_attack strftime add swap_one_attack drop_one_attack correct_string range predict dump shuffle set zip key_one_attack read_valid_lines tqdm get_confidence read_valid_lines list tqdm pr_green zip get_word_and_char_indices predict len argmax calc_scores npvalue npvalue normalize argmax calc_scores join split randint range len join split randint range len join get_keyboard_neighbors split randint range len join split randint range len read_valid_lines update time argmax str backward print npvalue shuffle extend save add_more_examples calc_scores get_word_and_char_indices range pickneglogsoftmax len update argmax time str read_dataset backward npvalue print shuffle save calc_scores range pickneglogsoftmax read_valid_lines list all_one_attack pr shuffle tqdm pr_green pr_red zip correct_string get_word_and_char_indices predict read_valid_lines str list dump print strip shuffle dict open random_all_one_attack zip append get_word_and_char_indices range predict enumerate all_one_attack add_one_attack append swap_one_attack drop_one_attack key_one_attack get_word_and_char_indices predict CNN load RNN get_qualitative_examples generate_ann evaluate BiLSTM print build_model start_training start_adversarial_training create_vocabulary check_against_spell_mistakes len append split range len split range len model zero_grad numpy cuda sorted FloatTensor get_batched_input_data append range CrossEntropyLoss compute_WER LongTensor is_available decode_line type criterion model_bg backward print extend step ElmoScRNN create_vocab save cuda str CHAR_VOCAB ElmoRNN Adam CrossEntropyLoss ScRNN range get_lines iterate is_available join load_vocab_dicts time 
readlines open load items sorted list defaultdict dump str dict get_lines split append range open load convert_vocab_dicts convert_vocab_dicts_bg open defaultdict defaultdict append append get_target_representation pad_target_sequence pad_input_sequence split append get_line_representation max range len get_keyboard_word_representation get_add_word_representation get_drop_word_representation get_swap_word_representation append split one_hot bag_of_chars zero_vector randint len get_swap_word_representation randint random len get_swap_word_representation randint _get_random_char len get_swap_word_representation randint _get_keyboard_neighbor len append defaultdict range len print print print ENDC WARNING BOLD print ENDC FAIL BOLD print ENDC OKGREEN BOLD print ENDC UNDERLINE BOLD print ENDC OKBLUE BOLD print ENDC OKGREEN BOLD print OKBLACK ENDC BOLD print ENDC OKRED BOLD print ENDC OKYELLOW BOLD print ENDC OKMAGENTA BOLD print ENDC OKCYAN BOLD print ENDC OKWHITE BOLD print HEADER ENDC BOLD print ENDC BLINK BOLD print ENDC CONCEAL BOLD print BACKBLACK ENDC BOLD print ENDC BACKRED BOLD print ENDC BACKGREEN BOLD print ENDC BACKYELLOW BOLD print ENDC BACKBLUE BOLD print ENDC BACKMEGNETA BOLD print ENDC BACKCYAN BOLD print ENDC BACKWHITE BOLD getresponse read request fromstring close urlencode findall HTTPConnection getresponse read request fromstring close urlencode HTTPConnection print ATDChecker join correct_word print tqdm append read_birkbeck ATDChecker | ## Combating Adversarial Misspellings Code for the following paper. > Combating Adversarial Misspellings with Robust Word Recognition > > *Danish Pruthi, Bhuwan Dhingra and Zachary C. Lipton* > > The 57th Annual Meeting of the Association for Computational Linguistics (ACL-19) (To Appear). ### Requirements ``` nltk | 1,832 |
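The attack helpers listed for this repository (`swap_one_attack`, `drop_one_attack`, `add_one_attack`, `key_one_attack`) point to single-character misspelling perturbations. A minimal sketch of what such attacks look like — simplified names and behaviour, not the repository's exact implementations — is:

```python
# Single-character perturbations in the spirit of the attack helpers
# listed above. Illustrative versions only.

def swap_one(word, i):
    """Swap the adjacent characters at positions i and i+1."""
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def drop_one(word, i):
    """Delete the character at position i."""
    return word[:i] + word[i + 1:]

def add_one(word, i, c):
    """Insert character c before position i."""
    return word[:i] + c + word[i:]

def key_one(word, i, neighbors):
    """Replace the character at position i with its first keyboard neighbour."""
    return word[:i] + neighbors[word[i]][0] + word[i + 1:]

print(swap_one("word", 1))              # -> "wrod"
print(drop_one("word", 1))              # -> "wrd"
print(add_one("word", 2, "x"))          # -> "woxrd"
print(key_one("word", 0, {"w": "qe"}))  # -> "qord"
```

A real attacker would pick the position (and, for `key_one`, a random neighbour from a full keyboard-adjacency map) to maximize damage to the downstream classifier.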
danmohaha/CVPRW2019_Face_Artifacts | ['face swapping'] | ['Exposing DeepFake Videos By Detecting Face Warping Artifacts'] | py_utils/face_utils/__init__.py demo.py resolution_network.py tf_utils/__init__.py py_utils/face_utils/dlib_utils.py py_utils/vid_utils/proc_aud.py tf_utils/utils.py py_utils/plot_utils/__init__.py py_utils/vis.py py_utils/plot_utils/plot.py py_utils/face_utils/lib.py solver.py py_utils/img_utils/proc_img.py py_utils/face_utils/umeyama.py py_utils/plot.py py_utils/vid_utils/proc_vid.py run im_test ResoNet Solver vis_plot draw2D draw_face_rect draw_face_landmarks vis_im get_landmarks_predictor get_front_face_detector get_all_face_mask align crop_eye get_aligned_face_and_landmarks get_face_loc correct_colours draw_convex_hull get_face_mask get_2d_aligned_face cut_head bur_size get_face_mask_v2 shape_to_np umeyama draw_words2im aug random_transform get_mask resize draw_barchart draw2D draw_heatmap audio_transfer get_video_frame_nums get_video_dims parse_vid_into_imgs crop_video resize_video gen_vid_from_folder gen_vid parse_vid get_video_fps mean_value format align resize tuple test mean warning vis_im info cut_head range append len join basicConfig str im_test print close mean info append imread listdir parse_vid enumerate show plot xlabel grid draw ylabel close ylim title figure legend _renderer xlim array enumerate arange float32 draw2D resize len imwrite concatenate append range len rectangle circle convexHull fillConvexPoly zeros transpose draw_convex_hull concatenate transpose inv dot draw_convex_hull int32 append zeros expand_dims array range mean int norm mean int norm GaussianBlur zeros range seed int minimum min maximum append randint max enumerate append uint8 face_detector enumerate zeros uint8 uint8 lmark_predictor append face_detector shape_to_np concatenate transpose dot append get_2d_aligned_face minimum int min maximum append max range len svd T ones matrix_rank mean dot eye sum diag fromarray Contrast random_transform Brightness 
copy uniform Sharpness array Color enhance enumerate warpAffine random copy getRotationMatrix2D uniform enumerate putText LINE_AA FONT_HERSHEY_SIMPLEX zeros tuple enumerate update update xlabel grid draw ylabel close title ylim bar figure legend xlim xticks array enumerate savefig clf heatmap write_videofile VideoFileClip audio set_audio int32 resize gen_vid parse_vid enumerate get VideoCapture CAP_PROP_FRAME_HEIGHT int32 CAP_PROP_FRAME_WIDTH release get VideoCapture int CAP_PROP_FRAME_COUNT release get VideoCapture CAP_PROP_FPS release get VideoCapture int read CAP_PROP_FRAME_HEIGHT int32 append CAP_PROP_FPS CAP_PROP_FRAME_COUNT CAP_PROP_FRAME_WIDTH release len format imwrite print parse_vid enumerate uint8 replace print write VideoWriter VideoWriter_fourcc release sorted listdir gen_vid gen_vid enumerate parse_vid resize concat split | danmohaha/CVPRW2019_Face_Artifacts | 1,833 |
danmohaha/WIFS2018_In_Ictu_Oculi | ['face swapping'] | ['In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking'] | __init__.py solu_base.py proc_data/__init__.py train_blink_cnn.py py_utils/img_utils/proc_img.py py_utils/metric_utils/metric.py deep_base/vgg16.py blink_net.py py_utils/plot_utils/__init__.py proc_data/seq_data.py toy/run_lrcn.py solver.py py_utils/vid_utils/__init__.py py_utils/x_utils.py deep_base/ops.py py_utils/face_utils/__init__.py toy/run_cnn.py py_utils/face_utils/dlib_utils.py train_blink_lrcn.py py_utils/plot_utils/plot.py py_utils/face_utils/lib.py py_utils/vid_utils/proc_vid.py py_utils/__init__.py deep_base/__init__.py py_utils/face_utils/umeyama.py proc_data/eye_data.py py_utils/vis_utils/vis.py py_utils/img_utils/__init__.py BlinkLRCN BlinkCNN Solu Solver conv2D batch_norm use_bias_helper fully_connected get_restore_var_list list_vars_in_ckpt max_pool avg_pool activate get_vgg16_pool5 get_prob get_vgg16_conv5 EyeData SeqData pad_to_max_len get_landmarks_predictor get_front_face_detector align get_aligned_face_and_landmarks get_face_loc draw_convex_hull get_2d_aligned_face cut_head bur_size crop_eye shape_to_np umeyama draw_words2im aug random_transform get_mask resize AverageMeter accuracy draw_barchart draw2D draw_heatmap get_video_frame_nums get_video_dims parse_vid_into_imgs crop_video resize_video gen_vid_from_folder gen_vid parse_vid get_video_fps vis_seq vis_eye vis_im main main use_bias_helper use_bias_helper format print get_collection GLOBAL_VARIABLES list_vars_in_ckpt get_collection GLOBAL_VARIABLES list_variables conv2D activate edict max_pool max_pool get_vgg16_conv5 dropout fully_connected get_vgg16_pool5 fc8 fc7 fc6 softmax activate append range len convexHull fillConvexPoly mean int norm zeros range minimum int min maximum append randint max enumerate append uint8 face_detector enumerate uint8 lmark_predictor append face_detector shape_to_np concatenate transpose dot append get_2d_aligned_face 
minimum int min maximum append max range len svd T ones matrix_rank mean dot eye sum diag fromarray Contrast random_transform Brightness copy uniform Sharpness array Color enhance enumerate warpAffine random copy getRotationMatrix2D uniform enumerate putText LINE_AA FONT_HERSHEY_SIMPLEX zeros tuple enumerate zeros_like shape append sum max update plot xlabel grid draw ylabel close title ylim figure legend xlim enumerate update xlabel grid draw ylabel close title ylim bar figure legend xlim xticks array enumerate savefig clf heatmap int32 resize gen_vid parse_vid enumerate get VideoCapture CAP_PROP_FRAME_HEIGHT int32 CAP_PROP_FRAME_WIDTH release get VideoCapture int CAP_PROP_FRAME_COUNT release get VideoCapture CAP_PROP_FPS release get VideoCapture int read CAP_PROP_FRAME_HEIGHT int32 append CAP_PROP_FPS CAP_PROP_FRAME_COUNT CAP_PROP_FRAME_WIDTH release len format imwrite print parse_vid enumerate uint8 replace print write VideoWriter VideoWriter_fourcc release sorted listdir gen_vid gen_vid enumerate parse_vid resize drawContours circle astype convexHull str imwrite concatenate range len str imwrite range len str Session print close test build BlinkCNN Solu get_eye_by_fid push_eye_prob range frame_num init gen_videos reset_default_graph plot_by_fid Solver arange resize append BlinkLRCN minimum max_time pad_to_max_len | ## In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking Yuezun Li, Ming-ching Chang and Siwei Lyu \ University at Albany, State University of New York, USA \ IEEE International Workshop on Information Forensics and Security (WIFS), 2018 \ [https://arxiv.org/abs/1806.02877](https://arxiv.org/abs/1806.02877) ### Contents 1. [Requirement](#Requirement) 2. [Usage](#Usage) 3. [Train](#Train) ### Requirement | 1,834 |
danxuhk/CMT-CNN | ['pedestrian detection'] | ['Learning Cross-Modal Deep Representations for Robust Pedestrian Detection'] | python/caffe/io.py python/caffe/test/test_python_layer.py scripts/download_model_binary.py python/caffe/test/test_net.py tools/extra/resize_and_crop_images.py python/draw_net.py src/caffe/test/test_data/generate_sample_data.py python/caffe/draw.py python/caffe/pycaffe.py tools/extra/extract_seconds.py scripts/cpp_lint.py python/classify.py examples/web_demo/exifutil.py python/caffe/test/test_solver.py python/caffe/classifier.py examples/finetune_flickr_style/assemble_data.py tools/extra/parse_log.py python/caffe/__init__.py examples/web_demo/app.py scripts/copy_notebook.py python/caffe/detector.py python/detect.py download_image start_tornado start_from_terminal embed_image_html classify_upload index allowed_file ImagenetClassifier classify_url open_oriented_im apply_orientation main main main parse_args Classifier Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto arraylist_to_blobprotovecor_str array_to_datum resize_image blobprotovector_str_to_arraylist load_image oversample _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_batch _Net_inputs simple_net_file TestNet python_net_file SimpleLayer TestPythonLayer TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState FindPreviousMatchingAngleBracket 
CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop urlretrieve get read info load_image classify_image StringIO join replace info secure_filename save filename open_oriented_im classify_image fromarray replace astype save resize StringIO items list listen HTTPServer format print start WSGIContainer update start_tornado add_option OptionParser debug port parse_args ImagenetClassifier forward run hasattr _getexif astype float32 tile apply_orientation open transpose model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame 
Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser read NetParameter output_image_file rankdir Merge draw_net_to_file items list DESCRIPTOR batch_size str num_output get_pooling_types_dict add_edge get_edge_label list Dot get_layer_label values name choose_color_by_layertype Edge Node bottom append type layer add_node top shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array items list layers index set outputs _forward len items list _backward layers inputs index set len items list asarray extend copy next _batch iter forward values len items list asarray backward extend next _batch zip_longest zip iter forward values len ascontiguousarray list concatenate iter num zeros next range values len NamedTemporaryFile str close write error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find range len FindEndOfExpressionInLine range len FindStartOfExpressionInLine error min search I range len FileInfo RepositoryName sep sub ParseNolintSuppressions error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin NumLines Match raw_lines range Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub find 
CheckComment Match range Search lines_without_raw_strings error group starting_linenum Match range Search error rfind len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine find Match range CloseExpression len error Match finditer normalize isinstance GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set append values M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression lstrip findall Match range Search ReplaceAll error Match Search endswith replace setdefault group search CleanseComments open list FilesBelongToSameModule error search copy sub NumLines FullName keys range error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard NumLines CheckCompletedBlocks CheckForIncludeWhatYouUse range ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader 
ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile getwriter int time write flush load join index int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir logfile_path | # Learning Cross-Modal Deep Representations for Robust Pedestrian Detection By Dan Xu, Wanli Ouyang, Elisa Ricci, Xiaogang Wang and Nicu Sebe <p align="center"> <img src="Teaser.jpg" width="600"/> </p> ## Introduction CMT-CNN is a pedestrian detection approach associated with an arXiv submission https://arxiv.org/abs/1704.02431 which was accepted at CVPR 2017. The code is implemented with Caffe and has been tested under the configurations of Ubuntu 14.04, MATLAB 2015b and CUDA 8.0. ## Cite CMT-CNN Please consider citing our paper if the code is helpful in your research work: <pre>@inproceedings{xu2017learning,
danxuhk/ContinuousCRF-CNN | ['depth estimation', 'monocular depth estimation'] | ['Monocular Depth Estimation using Multi-Scale Continuous CRFs as Sequential Deep Networks', 'Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation'] | python/caffe/io.py python/caffe/test/test_nccl.py python/train.py python/caffe/test/test_python_layer.py scripts/download_model_binary.py python/DataTransformer_seg.py python/MultilabelDataLayer.py python/caffe/net_spec.py python/caffe/coord_map.py python/MultilabelDataLayer_bak.py python/caffe/test/test_net.py tools/extra/resize_and_crop_images.py python/draw_net.py python/caffe/test/test_draw.py python/caffe/test/test_net_spec.py src/caffe/test/test_data/generate_sample_data.py python/DataTransformer.py python/caffe/DataTransformer.py python/caffe/draw.py python/caffe/pycaffe.py tools/extra/extract_seconds.py scripts/cpp_lint.py python/classify.py examples/web_demo/exifutil.py examples/pycaffe/layers/pyloss.py python/caffe/test/test_solver.py python/caffe/classifier.py examples/pycaffe/layers/pascal_multilabel_datalayers.py examples/finetune_flickr_style/assemble_data.py python/caffe/test/test_io.py python/caffe/test/test_python_layer_with_param_str.py examples/pycaffe/tools.py tools/extra/parse_log.py scripts/split_caffe_proto.py python/caffe/__init__.py python/caffe/test/test_layer_type_list.py examples/web_demo/app.py python/caffe/MultilabelDataLayer.py scripts/copy_notebook.py python/MultilabelDataLayer_seg.py python/caffe/detector.py python/DataTransformer_bak.py python/detect.py examples/CCRF_CNN_DepthEstimation/test_NYU2.py examples/pycaffe/caffenet.py python/caffe/test/test_coord_map.py tools/extra/summarize.py download_image make_net max_pool caffenet conv_relu fc_relu CaffeSolver SimpleTransformer print_info check_params PascalMultilabelDataLayerSync load_pascal_annotation BatchLoader EuclideanLossLayer start_tornado start_from_terminal embed_image_html classify_upload index allowed_file 
ImagenetClassifier classify_url open_oriented_im apply_orientation main DataTransformer DataTransformer DataTransformer main main parse_args print_info check_params BatchLoader Multilabel_Data_Layer print_info check_params BatchLoader Multilabel_Data_Layer print_info check_params BatchLoader Multilabel_Data_Layer train solve time Classifier coord_map UndefinedMapException conv_params coord_map_from_to AxisMismatchException inverse crop_params compose crop DataTransformer Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto array_to_datum resize_image arraylist_to_blobprotovector_str blobprotovector_str_to_arraylist load_image oversample print_info check_params BatchLoader Multilabel_Data_Layer Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_get_id_name _Net_inputs _Net_layer_dict TestCoordMap coord_net_spec getFilenames TestDraw TestBlobProtoToArray TestArrayToDatum TestLayerTypeList TestNCCL TestLevels TestStages simple_net_file TestNet TestAllInOne lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer phase_net_file TestPythonLayer ParameterLayer PhaseLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters 
UpdateIncludeState FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop print_table printed_len summarize_net main read_net format_param download_image make_net max_pool caffenet conv_relu fc_relu CaffeSolver SimpleTransformer print_info check_params PascalMultilabelDataLayerSync load_pascal_annotation BatchLoader EuclideanLossLayer start_tornado start_from_terminal embed_image_html classify_upload index allowed_file ImagenetClassifier classify_url open_oriented_im apply_orientation main DataTransformer main parse_args print_info check_params BatchLoader Multilabel_Data_Layer train solve time Classifier coord_map UndefinedMapException conv_params 
coord_map_from_to AxisMismatchException inverse crop_params compose crop DataTransformer Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto array_to_datum resize_image arraylist_to_blobprotovector_str blobprotovector_str_to_arraylist load_image oversample print_info check_params BatchLoader Multilabel_Data_Layer Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_get_id_name _Net_inputs _Net_layer_dict TestCoordMap coord_net_spec getFilenames TestDraw TestBlobProtoToArray TestArrayToDatum TestLayerTypeList TestNCCL TestLevels TestStages simple_net_file TestNet TestAllInOne lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer phase_net_file TestPythonLayer ParameterLayer PhaseLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData 
CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop print_table printed_len summarize_net main read_net format_param imread urlretrieve Convolution InnerProduct Data SoftmaxWithLoss LRN Accuracy max_pool InnerProduct conv_relu fc_relu Dropout join list getElementsByTagName get_data_from_tag csr_matrix dict zip zeros float range enumerate len print format get read info load_image classify_image StringIO join replace info secure_filename save filename open_oriented_im classify_image fromarray replace astype save resize StringIO items list listen HTTPServer format print start WSGIContainer update start_tornado add_option OptionParser debug port parse_args ImagenetClassifier forward run hasattr _getexif astype float32 tile apply_orientation open transpose model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf 
detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser read NetParameter output_image_file rankdir Merge TRAIN draw_net_to_file TEST Process str join init_log start append new_uid range log len before_backward layers display add_callback after_backward after_forward Timer append before_forward range len max_iter restore time set_solver_count set_solver_rank add_callback set_device set_multiprocess SGDSolver after_backward set_mode_gpu layer_wise_reduce step bcast NCCL len get params array get params array crop_params conv_params pop collect_bottoms add fn coord_map compose coord_map_from_to items list DESCRIPTOR batch_size str num_output get_pooling_types_dict add_edge get_edge_label list Dot exclude get_layer_label add_node values choose_color_by_layertype Edge Node bottom append type layer include top data array diff shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array LayerParameter list NetParameter _to_proto extend Counter OrderedDict values iteritems hasattr isinstance extend add getattr setattr list OrderedDict _blobs _blob_names zip list _blob_loss_weights OrderedDict _blob_names zip _layer_names list layers OrderedDict zip OrderedDict list keys list keys iteritems layers index set outputs _forward len iteritems _backward layers inputs index set len iteritems asarray extend copy next _batch itervalues forward len iteritems asarray backward extend copy next _batch itervalues zip_longest zip forward len ascontiguousarray concatenate itervalues zeros next range len data Pooling pool Convolution NetSpec Deconvolution conv Input join walk dirname abspath NamedTemporaryFile str close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU 
InnerProduct data NetSpec DummyData Silence data2 error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find range len FindEndOfExpressionInLine range len FindStartOfExpressionInLine error min search I range len FileInfo RepositoryName sep sub ParseNolintSuppressions error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin NumLines Match raw_lines range Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub find CheckComment Match range Search lines_without_raw_strings error group starting_linenum Match range Search error rfind len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine find Match range CloseExpression len error Match finditer normalize isinstance PY2 GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set itervalues append M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression 
lstrip findall Match range Search ReplaceAll error Match Search endswith replace setdefault group search CleanseComments open list FilesBelongToSameModule error search copy sub NumLines FullName keys range error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard NumLines CheckCompletedBlocks CheckForIncludeWhatYouUse range ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile getwriter PY2 int time write flush load join index int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir logfile_path NetParameter decay_mult format name lr_mult append print zip len get join str format convolution_param list setdefault param kernel_size map set top bottom append type module layer enumerate print_table filename summarize_net read_net | # Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation By Dan Xu, Elisa Ricci, Wanli Ouyang, Xiaogang Wang and Nicu Sebe <p 
align="center"> <img src="examples/images/framework.jpg" width="800"/> </p> ## Introduction CCRF-CNN is a continuous CRFs model implemented with neural networks for structured fusion of multi-scale predictions which is applied in monocular depth estimation and was accepted at CVPR 2017. </br> The currently published version implements a multi-scale cascade continuous CRF model. The model is implemented as a CNN layer and can be also applicable to other continuous regression problems. </br> The code is implemented under Caffe and has been tested under the configurations of Ubuntu 14.04 and CUDA 8.0.</br> Links: [<a href='https://arxiv.org/pdf/1704.02157.pdf'>CVPR Paper</a>][<a href='https://arxiv.org/abs/1803.00891'>TPAMI Paper</a>][<a href='https://youtu.be/4mdqh6YGhgE'>Oral Presentation</a>] | 1,836 |
danxuhk/StructuredAttentionDepthEstimation | ['depth estimation', 'monocular depth estimation'] | ['Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation'] | python/caffe/io.py python/caffe/test/test_nccl.py python/train.py python/caffe/test/test_python_layer.py scripts/download_model_binary.py python/caffe/net_spec.py python/caffe/coord_map.py StructuredAttentionDepthEstimation/utils/fill_depth_hole.py python/caffe/test/test_net.py tools/extra/resize_and_crop_images.py python/draw_net.py python/caffe/test/test_draw.py python/caffe/test/test_net_spec.py src/caffe/test/test_data/generate_sample_data.py python/DataTransformer.py python/Pixel_Data_Layer.py python/caffe/draw.py python/caffe/pycaffe.py StructuredAttentionDepthEstimation/train.py tools/extra/extract_seconds.py scripts/cpp_lint.py python/classify.py examples/web_demo/exifutil.py examples/pycaffe/layers/pyloss.py python/caffe/test/test_solver.py StructuredAttentionDepthEstimation/utils/evaluation_depth.py python/caffe/classifier.py examples/pycaffe/layers/pascal_multilabel_datalayers.py examples/finetune_flickr_style/assemble_data.py StructuredAttentionDepthEstimation/data/save_16bitpng_gt.py StructuredAttentionDepthEstimation/utils/evaluation_utils.py python/caffe/test/test_io.py StructuredAttentionDepthEstimation/prototxt/gen_deploy_prototxt.py python/caffe/test/test_python_layer_with_param_str.py examples/pycaffe/tools.py tools/extra/parse_log.py scripts/split_caffe_proto.py python/caffe/__init__.py StructuredAttentionDepthEstimation/prototxt/gen_train_prototxt.py python/caffe/test/test_layer_type_list.py examples/web_demo/app.py scripts/copy_notebook.py StructuredAttentionDepthEstimation/test_kitti_depth.py python/caffe/detector.py python/detect.py examples/pycaffe/caffenet.py python/caffe/test/test_coord_map.py tools/extra/summarize.py download_image make_net max_pool caffenet conv_relu fc_relu CaffeSolver SimpleTransformer print_info check_params 
PascalMultilabelDataLayerSync load_pascal_annotation BatchLoader EuclideanLossLayer start_tornado start_from_terminal embed_image_html classify_upload index allowed_file ImagenetClassifier classify_url open_oriented_im apply_orientation main DataTransformer main main parse_args print_info check_params BatchLoader PixelDataLayer train solve time Classifier coord_map UndefinedMapException conv_params coord_map_from_to AxisMismatchException inverse crop_params compose crop Detector get_edge_label draw_net get_layer_lr_mult get_layer_label get_pooling_types_dict choose_color_by_layertype get_pydot_graph draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto array_to_datum resize_image arraylist_to_blobprotovector_str blobprotovector_str_to_arraylist load_image oversample Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_get_id_name _Net_inputs _Net_layer_dict TestCoordMap coord_net_spec getFilenames TestDraw TestBlobProtoToArray TestArrayToDatum TestLayerTypeList TestNCCL TestLevels TestStages simple_net_file TestNet TestAllInOne lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer phase_net_file TestPythonLayer ParameterLayer PhaseLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState 
FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname train solve time resnet50 SAN MeanFieldUpdate _resnet_block _conv_bn_scale resnet50 SAN MeanFieldUpdate _resnet_block _conv_bn_scale main sub2ind read_file_data_new lin_interp convert_disps_to_depths_kitti read_calib_file generate_depth_map compute_errors read_file_data read_text_lines load_gt_disp_kitti load_velodyne_points get_focal_length_baseline fill_depth_colorization get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop print_table printed_len summarize_net main read_net format_param imread urlretrieve Convolution InnerProduct Data SoftmaxWithLoss LRN Accuracy max_pool InnerProduct conv_relu fc_relu Dropout join list getElementsByTagName get_data_from_tag 
csr_matrix dict zip zeros float range enumerate len print format get read info load_image classify_image StringIO join replace info secure_filename save filename open_oriented_im classify_image fromarray replace astype save resize StringIO items list listen HTTPServer format print start WSGIContainer update start_tornado add_option OptionParser debug port parse_args ImagenetClassifier forward run hasattr _getexif astype float32 tile apply_orientation open transpose model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser display_lrm read NetParameter output_image_file rankdir Merge TRAIN draw_net_to_file TEST Process str join init_log start append new_uid range log len before_backward layers display add_callback after_backward after_forward Timer append before_forward range len max_iter restore time set_solver_count set_solver_rank add_callback set_device set_multiprocess SGDSolver after_backward set_mode_gpu layer_wise_reduce step bcast NCCL len get params array get params array crop_params conv_params pop collect_bottoms add fn coord_map compose coord_map_from_to items list DESCRIPTOR batch_size str num_output getattr join get_layer_lr_mult name kernel_size stride get_pooling_types_dict pad any append type add_edge get_edge_label list Dot exclude get_layer_label add_node values choose_color_by_layertype Edge Node bottom append type layer include top data array diff shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array LayerParameter list NetParameter _to_proto extend Counter OrderedDict values iteritems hasattr 
isinstance extend add getattr setattr list OrderedDict _blobs _blob_names zip list _blob_loss_weights OrderedDict _blob_names zip _layer_names list layers OrderedDict zip OrderedDict list keys list keys iteritems layers index set outputs _forward len iteritems _backward layers inputs index set len iteritems asarray extend copy next _batch itervalues forward len iteritems asarray backward extend copy next _batch itervalues zip_longest zip forward len ascontiguousarray concatenate itervalues zeros next range len data Pooling pool Convolution NetSpec Deconvolution conv Input join walk dirname abspath NamedTemporaryFile str close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU InnerProduct data NetSpec DummyData Silence data2 error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find range len FindEndOfExpressionInLine range len FindStartOfExpressionInLine error min search I range len FileInfo RepositoryName sep sub ParseNolintSuppressions error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin NumLines Match raw_lines range Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub find CheckComment Match range Search lines_without_raw_strings error group starting_linenum Match range Search error rfind 
len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine find Match range CloseExpression len error Match finditer normalize isinstance PY2 GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set itervalues append M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression lstrip findall Match range Search ReplaceAll error Match Search endswith replace setdefault group search CleanseComments open list FilesBelongToSameModule error search copy sub NumLines FullName keys range error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard NumLines CheckCompletedBlocks CheckForIncludeWhatYouUse range ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile 
getwriter PY2 int time write flush load join index copy_from Convolution Scale BatchNorm _conv_bn_scale format Eltwise ReLU res4c_relu ReLU _resnet_block res3d_relu Pooling pool1 res2b_relu scale_conv1 res2c_relu res5a_relu res3c_relu res2a_relu res4b_relu res3b_relu res4e_relu res4f_relu res5b_relu conv1_relu res3a_relu res4a_relu res4d_relu _conv_bn_scale format Convolution Scale Concat Sigmoid Eltwise prediction_map_2 updated_f3_mf2 updated_f1_mf3 res5c_dec updated_f3_mf4 ReLU updated_f1_mf2 res3d_relu res4f_dec_1_relu prediction_map_2_relu res5c_relu res5c_dec_1 updated_f3_mf1 updated_f3_mf5 prediction_map_1 MeanFieldUpdate Interp updated_f1_mf4 res5c_dec_1_relu res4f_dec_1 res3d_dec updated_f2_mf5 Convolution resnet50 res4f_dec Deconvolution updated_f2_mf1 updated_f2_mf3 updated_f2_mf4 updated_f3_mf3 res4f_relu updated_f1_mf1 prediction_map_1_relu updated_f2_mf2 updated_f1_mf5 resize generate_depth_map logical_and read_file_data shape min_depth read_text_lines append range pred_file astype copy kitti_dir max_depth compute_errors test_file_list float32 int32 zeros maximum mean sqrt abs log astype float32 zfill append imread range shape resize append range len readlines close open format print int32 isfile append split format int32 isfile append split reshape T arange LinearNDInterpolator reshape meshgrid set reshape read_calib_file int T sub2ind lin_interp read_calib_file reshape hstack min dot shape vstack round eye zeros load_velodyne_points nanmin flatten round max list exp dia_matrix shape spsolve sum range nanmax product COLOR_BGR2GRAY copy coo_matrix enumerate var time reshape min nanmean zeros cvtColor int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir 
logfile_path NetParameter decay_mult format name lr_mult append print zip len get join str format convolution_param list setdefault param kernel_size map set top bottom append type module layer enumerate print_table filename summarize_net read_net | danxuhk/StructuredAttentionDepthEstimation | 1,837 |
darioizzo/neurostability | ['imitation learning'] | ['On the stability analysis of deep neural network representations of an optimal state-feedback'] | pyquad/ode45.py pyquad/__init__.py pyquad/nn_controller.py Controller _rkf45_stepper rk4_fixed rkf45 _rkf45_stepper_gdual rkf45_gduals norm f f append _rkf45_stepper str progressbar rkf45 print _rkf45_stepper_gdual append diff len arange progressbar f array range | # Stability study of deep networks controlling nonlinear ODEs This project contains all the python code and pickled network data to reproduce the results in the paper: Dario Izzo, Dharmesh Tailor and Thomas Vasileiou: "On the stability analysis of optimal state feedbacks as represented by deep neural models" - [https://arxiv.org/abs/1812.02532] **Usage**: clone the repository and run the notebooks in Jupyter after having installed (```pip``` or ```conda```) the requested dependencies. # Guidance and Control Network (G&CNET) **Definition:**: A G&CNET (Guidance and Control Network) is a type of neuro controller, in particular a deep, artificial, feed forward neural network trained on state-action pairs representing the optimal control actions, with respect to the performance index: <p align="center"> <a href="https://www.codecogs.com/eqnedit.php?latex=J&space;=&space;\int_{t_0}^{t_f}&space;l(\mathbf&space;x,&space;\mathbf&space;u,&space;t)&space;dt" target="_blank"><img src="https://latex.codecogs.com/gif.latex?J&space;=&space;\int_{t_0}^{t_f}&space;l(\mathbf&space;x,&space;\mathbf&space;u,&space;t)&space;dt" title="J = \int_{t_0}^{t_f} l(\mathbf x, \mathbf u, t) dt" /></a> </p> In other words, a G&CNET is a **neural optimal feedback** for a non linear dynamical system. | 1,838 |
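The G&CNET definition above pairs a feed-forward state-feedback network with the performance index J = integral of l(x, u, t) dt. A minimal, hedged sketch of both pieces follows; the double-integrator dynamics, the quadratic running cost, the random untrained weights, and all function names here are illustrative assumptions, not code from the repository.

```python
import numpy as np

def network(x, W1, b1, W2, b2):
    """Tiny MLP mapping state -> control (one tanh hidden layer)."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def simulate(x0, weights, dt=0.01, steps=500):
    """Euler-integrate xdot = A x + B u with the network in the loop."""
    A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator (assumed)
    B = np.array([[0.0], [1.0]])
    xs, us, x = [x0], [], x0
    for _ in range(steps):
        u = network(x, *weights)
        x = x + dt * (A @ x + B @ u)
        xs.append(x)
        us.append(u)
    return np.array(xs), np.array(us)

def performance_index(xs, us, dt):
    """Left Riemann sum of the running cost l(x, u) = x'x + u'u."""
    l = np.einsum('ij,ij->i', xs[:-1], xs[:-1]) + np.einsum('ij,ij->i', us, us)
    return float(np.sum(l) * dt)

rng = np.random.default_rng(0)
weights = (0.1 * rng.standard_normal((8, 2)), np.zeros(8),
           0.1 * rng.standard_normal((1, 8)), np.zeros(1))
xs, us = simulate(np.array([1.0, 0.0]), weights)
J = performance_index(xs, us, dt=0.01)
print(round(J, 3))  # a finite, non-negative cost
```

Training a G&CNET would replace the random weights with supervised fitting on optimal state-action pairs; a closed-loop simulation and cost evaluation like the one above can then be used to check the stability of the resulting feedback.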
darthdeus/master-thesis-code | ['gaussian processes'] | ['Dealing with Integer-valued Variables in Bayesian Optimization with Gaussian Processes'] | experiments/simple_function.py bopt/cli/run.py bopt/cli/cli.py bopt/cli/init.py bopt/cli/plot.py bopt/models/model.py bopt/experiment.py bopt/cli/convert_meta.py tests/opt_functions.py tests/test_bounds.py bopt/cli/manual_run.py bopt/cli/jobstat.py bopt/runner/sge_runner.py bopt/basic_types.py bopt/hyperparam_values.py experiments/simple_function2d.py bopt/cli/exp.py bopt/models/random_search.py bopt/cli/suggest.py tests/test_todict_fromdict.py bopt/cli/run_single.py bopt/runner/job_loader.py bopt/runner/local_runner.py tests/test_opt_functions.py bopt/acquisition_functions/acquisition_functions.py bopt/cli/debug.py bopt/cli/web.py pajp.py bopt/runner/runner_loader.py bopt/plot.py bopt/cli/clean.py bopt/sample.py bopt/runner/abstract.py bopt/models/parameters.py bopt/models/gpy_model.py bopt/__init__.py benchmarks/run_single_fun.py bopt/cli/util.py bopt/gp_config.py experiments/rl/gym_evaluator.py experiments/rl/monte_carlo.py setup.py experiments/rl/cart_pole_evaluator.py benchmarks/measure_funs.py Hyperparameter OptimizationFailed LogscaleInt LogscaleFloat Float Discrete Integer Bound ExperimentStats Experiment NoAliasDumper GPConfig GPParam HyperparamValues plot_current plot_convergence acq_for_dims plot_objective plots base64_plot Sample maybe_timestamp_to_datetime CollectFlag maybe_datetime_to_timestamp SampleCollection ExpectedImprovement ProbabilityOfImprovement AcquisitionFunction run main run_main flag_to_int convert_collect_flag run run run run run run run try_start_job run run run ensure_meta_yml handle_cd acquire_lock handle_cd_revertible create_slice_2d create_slice_1d Slice2D create_gp_for_data Slice1D run GPyModel RoundingKernelWrapper Model ModelParameters RandomSearch Runner Job JobLoader LocalJob LocalRunner RunnerLoader SGERunner SGEJob environment GymEnvironment main Easom XSquared 
get_opt_test_functions Eggholder Beale OptFunction get_fun_by_name McCormick TestOptFunctions TestOptFunctions test_exp1 TestToDictFromDict clear BytesIO seek close tight_layout savefig show subplot len imshow figure enumerate subplot list plot xlabel ylabel accumulate title figure ravel range len model plot_objective GridSpec argmax max strftime savefig format close acquisition_fn lengthscale mkdir float join isinstance variance min hyperparameters figure GridSpecFromSubplotSpec add_subplot axhline plot_mean str list name set_xlabel tolist axvline Subplot append range plot_data plot set acq_for_dims suptitle subplots_adjust set_ylabel max reshape set_ylim set_xlim raw_call shape stack sqrt linspace meshgrid zeros contour full range predict len datetime isinstance datetime isinstance run_main join format acquisition_fn_names name add_argument add_parser set_defaults getenv parse_known_args config_params ArgumentParser startswith func parse_args kernel_names add_subparsers isinstance dict update mkdir print chdir dir Gamma gamma_a optimize prior_for_hyperparam format tolist unconstrain set_prior GPRegression info Matern52 gp_config gamma_b enumerate to_xy reshape grid predictive_samples_before create_gp_for_data is_logscale append zeros float full range predict to_xy append reshape grid tolist predictive_samples_before create_gp_for_data shape stack is_logscale meshgrid float full range predict len watch Server print serve port dirname Flask abspath wsgi_app actions episodes ArgumentParser seed environment render append parse_args range reversed item fill gamma float zeros print add_argument epsilon_final randint step epsilon get_opt_test_functions Namespace randn Experiment for_manual_run GPConfig mapping_from_vector LocalRunner uniform sin sample_params | # Bayesian Optimization of HyperParameters - `bopt` [](https://travis-ci.org/darthdeus/bopt) [](https://codeclimate.com/github/darthdeus/bopt/maintainability) 
[](https://codeclimate.com/github/darthdeus/bopt/test_coverage) [](https://pypi.org/project/bopt/) [](https://pypi.org/project/bopt/) Available commands: ```python # Create a new experiment. bopt init -C META_DIR # Start tuning hyperparameters. bopt run -C META_DIR # Get an overview status of an experiment. bopt exp -C META_DIR # Start web visualizations of the results. | 1,839 |
data-llectual/ml-dna | ['stochastic optimization'] | ['Adam: A Method for Stochastic Optimization'] | deepnet-examples/tensorflow/tf-reshape.py deepnet-examples/tensorflow/baselineModelAndTFModel.py deepnet-examples/tensorflow/simple_reg.py deepnet-examples/tensorflow/baseline_model.py deepnet-examples/tensorflow/tf-test.py build_model_and_train_baseline build_training_data build_and_train_tensorflow_model train_model_with_minibatch read_goog_sp500_data main build_model_and_train_baseline build_training_data build_and_train_tensorflow_model train_model_with_minibatch read_goog_sp500_data main main build_and_train_tensorflow_model create_training_dataframe train_model_with_minibatch main main print build_model_and_train_baseline build_and_train_tensorflow_model pct_change to_datetime sort_values read_csv print array read_goog_sp500_data len reshape build_training_data LinearRegression fit minimize Variable float32 matmul placeholder square reduce_mean train_model_with_minibatch zeros ceil global_variables_initializer plot randn concat linspace DataFrame len create_training_dataframe reshape len print global_variables_initializer constant reshape | # ml-dna
This repo will cover different machine learning problems and their solutions. Solutions will mostly be in Python, with explanations in the source files.
Problem no 1 (source directory: /sales-generation/): design and develop an ML-based sales lead generation system.
1) DataScience-Problem.docx => This file provides a detailed description of the problem.
2) DataScience-Problem-Data.csv => This file contains data to train and test ML models.
3) ml-sales-lead.ipynb => solution for the problem as an IPython notebook.
4) ML-based-lead-generation.html => solution in HTML format with a detailed analysis of the problem. | 1,840
data61/aboleth | ['active learning'] | ['Noise Contrastive Priors for Functional Uncertainty'] | demos/regression_keras.py demos/regression_tutorial.py tests/test_baselayers.py aboleth/impute.py tests/test_losses.py aboleth/random.py tests/test_distributions.py tests/util.py aboleth/hlayers.py aboleth/baselayers.py tests/test_hlayers.py docs/conf.py tests/test_prediction.py tests/test_initialisers.py tests/test_utils.py aboleth/initialisers.py demos/sarcos.py aboleth/distributions.py aboleth/layers.py aboleth/prediction.py aboleth/util.py aboleth/version.py tests/test_kernels.py tests/test_layers.py demos/regression.py tests/conftest.py demos/imputation.py aboleth/__init__.py demos/mnist_softmax_regression.py aboleth/kernels.py demos/classification.py setup.py tests/test_impute.py aboleth/datasets.py demos/multi_input.py aboleth/losses.py MultiLayerComposite _stack2 stack LayerComposite MultiLayer Layer gp_draws fetch_gpml_sarcos_data _add_suffix norm_prior norm_posterior gaus_posterior _chollogdet kl_sum _kl_gaussian_normal Sum PerFeature Concat MeanImpute NormalImpute MaskInputLayer ImputeOp3 ExtraCategoryImpute ImputeColumnWise ScalarImpute _glorot_std _autonorm_std initialise_stds initialise_weights RBF _init_lenscale Matern RBFVariational ShiftInvariant NCPContinuousPerturb DenseNCP Conv2DVariational MaxPool2D _is_dim DropOut _make_prior Activation Flatten _sample_W SampleLayer3 Embed _l1_loss EmbedVariational Dense InputLayer _tile2samples SampleLayer RandomFourier _make_posterior DenseVariational NCPCategoricalPerturb Conv2D RandomArcCosine elbo max_posterior sample_percentiles sample_mean sample_model SeedGenerator set_hyperseed endless_permutations batch_prediction pos_variable summary_scalar _inverse_softplus __data_len summary_histogram batch main print_k_result print_final_result main batch_training main fetch_data main input_fn main batch_training WrapperLayer main batch_training svr linear nnet_dropout batch_training nnet_ncp nnet_bayesian nnet main 
bayesian_linear deep_gaussian_process gaussian_process r2_metric train_input_fn test_input_fn predict_input_fn get_data my_model main data make_categories make_missing_categories random make_missing_data make_image_data make_data make_graph test_stack_layer test_stack2 test_stack_multilayer test_stack2_multi logdet test_chollogdet KLdiv test_kl_gaussian_normal random_chol test_kl_normal_normal test_kl_gaussian_gaussian test_sum test_concat test_perfeature test_normal_impute test_scalar_impute test_mask_input test_extra_category_impute test_mean_impute test_autonorm_std test_initialise_stds test_initialise_weights test_glorot_std test_shift_invariant_kernels test_ARD_lenscales test_conv2d_distribution test_activation _make_placeholders test_fourier_features test_dense_outputs test_dense_embeddings test_flatten test_sample_layer_input_exception test_max_pooling2d test_arc_cosine test_embeddings_distribution test_ncp_cat_input_samples test_ncp_output test_net_outputs test_ncp_con_input_samples test_input_sample test_dropout test_input test_dense_distribution test_conv2d_outputs test_map_likelihood test_elbo_likelihood test_categorical_likelihood test_sample_mean test_sample_model test_sample_model_nodefaults test_sample_quantiles test_batch_predict test_batch StringLayer StringMultiLayer LayerComposite MultiLayerComposite isinstance svd kern RandomState randn astype float32 dot vstack next get join urlretrieve Bunch zeros Normal pos_variable Variable ones Normal summary_histogram zeros T tril_indices Variable reshape transpose scatter_nd MultivariateNormalTriL summary_histogram zeros kl_divergence reduce_sum to_float batch_shape_tensor identity reduce_sum mean event_shape_tensor scale to_dense _chollogdet log maximum reduce_sum matrix_diag_part abs log sqrt sqrt fn isinstance pos_variable isinstance summary_scalar astype float32 fn pos_variable summary_scalar ones astype float32 summary_histogram shape expand_dims tile len abs reduce_sum tuple shape list transpose 
shape add_to_collection sample range len norm_prior norm_posterior gaus_posterior to_float squeeze reduce_sum squeeze reduce_mean reduce_mean stack list get_default_session get_collection dict get_default_graph zip run permutation next RandomState Variable softplus _inverse_softplus array endless_permutations __data_len array_split arange __data_len round max histogram replace replace scalar exp log load_breast_cancer astype float32 shape RandomForestClassifier global_variables_initializer KFold format print log_loss append accuracy_score argmax print format std mean zeros_like target accuracy_score argmax Activation ScalarImpute LoggingTensorHook log_loss train_test_split fit_transform pos_variable format RandomState MeanImpute fetch_covtype selu Concat MaskInputLayer NormalImpute InputLayer print Variable confusion_matrix DenseVariational int32 zeros placeholder_with_default array get_next batch asarray labels dict images get_next read_data_sets input_fn Bernoulli concat flatten PerFeature log_prob sample_mean placeholder hstack Dense net probs fetch_data elbo RandomFourier minimize AdamOptimizer split len urlretrieve name astype NamedTemporaryFile read_csv T astype int32 image gp_draws round show exp meshgrid amax mean int line reshape figure amin circle max_posterior log_prob Dense InputLayer net elbo pos_variable log_prob DenseVariational InputLayer net tanh max_posterior log_prob Dense InputLayer Activation net selu max_posterior log_prob Dense DropOut InputLayer Activation net elbo selu log_prob DenseVariational InputLayer Activation net elbo pos_variable NCPContinuousPerturb DenseNCP selu log_prob Dense InputLayer Activation net RBF RandomFourier relu Dense reduce_mean InputLayer abs net elbo pos_variable RBF RandomFourier log_prob DenseVariational InputLayer net elbo pos_variable RandomFourier log_prob DenseVariational InputLayer net rand f logical_and r2_score astype float32 StandardScaler mean shape transform fetch_gpml_sarcos_data fit_transform 
mean_squared_error reduce_mean input_layer Normal Activation log_prob sample_mean pos_variable selu Dense InputLayer net elbo RBF RandomFourier r2_metric minimize DenseVariational AdamOptimizer mean_squared_error scalar list evaluate concatenate train_input_fn numeric_column Estimator test_input_fn predict_input_fn get_data append train keys range predict T randn dot cast linspace tile expand_dims array randn reshape astype float32 dot linspace tile expand_dims ones astype float32 cast linspace tile expand_dims sum astype int32 choice astype int32 concatenate data float32 placeholder shape stack InputLayer len TestCase _stack2 TestCase _stack2 StringLayer stack TestCase StringLayer stack TestCase StringMultiLayer ones TestCase Normal zeros kl_sum sum prod log astype float32 TestCase KLdiv MultivariateNormalTriL Normal random_chol kl_sum astype float32 TestCase KLdiv MultivariateNormalTriL random_chol kl_sum TestCase _chollogdet sum random_chol seed array rvs shape zip catlayer InputLayer TestCase Concat list TestCase catlayer repeat PerFeature range Sum addlayer TestCase MaskInputLayer TestCase s impute MeanImpute TestCase TestCase shape impute set_hyperseed ScalarImpute TestCase shape impute sqrt set_hyperseed NormalImpute copy TestCase shape impute set_hyperseed ExtraCategoryImpute _glorot_std _autonorm_std dict MagicMock initialise_weights dict initialise_stds TestCase kern weights kern weights layers TestCase InputLayer TestCase s InputLayer placeholder_with_default TestCase s InputLayer NCPContinuousPerturb net TestCase NCPCategoricalPerturb InputLayer net TestCase DenseNCP NCPContinuousPerturb net_ncp astype float32 TestCase DenseVariational InputLayer net tanh Activation TestCase astype float32 TestCase set_hyperseed DropOut drop MaxPool2D TestCase max_pool TestCase reshape Flatten _make_placeholders TestCase TestCase repeat int32 _make_placeholders len _make_placeholders TestCase get TestCase ceil shape _make_placeholders _make_placeholders kern TestCase 
_make_placeholders shape TestCase _make_placeholders TestCase _make_placeholders int32 TestCase len tolist placeholder shape tile expand_dims elbo layers log_prob TestCase max_posterior layers log_prob TestCase elbo layers like prob max_posterior float32 placeholder TestCase shape stack Dense int32 zeros InputLayer log_prob len mean reshape TestCase sample_mean percentile reshape TestCase sample_percentiles Graph Graph list arange extend next batch batch_prediction arange | data61/aboleth | 1,841 |
datamllab/autokaggle | ['automl'] | ['AutoML using Metadata Language Embeddings'] | tests/common.py autokaggle/utils.py mkdocs/autogen.py autokaggle/tabular_supervised.py tests/test_tabular_supervised.py examples/tabular_regression.py tests/test_tabular_preprocessor.py setup.py examples/tabular_classification_multiclass.py autokaggle/tabular_preprocessor.py examples/tabular_classification_binary.py parallel_function TabularPreprocessor call_parallel TabularRegressor TabularClassifier TabularSupervised ensure_dir rand_temp_folder_generator temp_path_generator get_func_comments get_comments_str to_md delete_space extract_comments remove_next_line parse_func_string change_args_to_dict skip_space_line clean_dir test_extract_data_info test_fit_predict_evalute_regression test_fit_evalute_predict_multiclass_classification test_fit_evalute_predict_binary_classification list DataFrame min isnan zeros expand_dims abs keys range len append parallel_function makedirs join gettempdir join temp_path_generator ensure_dir digits ascii_uppercase append join strip split join split match delete_space strip match startswith split remove_next_line change_args_to_dict skip_space_line items list isinstance parse_func_string to_md get_docstring parse to_md get_docstring parse_func_string get_func_comments join get_comments_str close write walk open join remove isfile rmdir listdir TabularPreprocessor extract_data_info clean_dir concatenate n_num reshape n_cat random array TabularClassifier clean_dir concatenate print evaluate random randint array predict fit TabularClassifier clean_dir concatenate evaluate random randint array predict fit TabularRegressor clean_dir concatenate evaluate random randint array predict fit | # autokaggle [](https://travis-ci.org/datamllab/autokaggle) Automated Machine Learning (AutoML) for Kaggle Competition ### Automated tabular classifier tutorial. 
Class `TabularClassifier` and `TabularRegressor` are designed to automatically generate the best-performing shallow/deep architecture for a given tabular dataset. (Currently, this module only supports the lightgbm classifier and regressor.) ```python clf = TabularClassifier(verbose=True) clf.fit(x_train, y_train, time_limit=12 * 60 * 60, data_info=datainfo) ``` | 1,842
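The autokaggle README in the row above passes `time_limit=12 * 60 * 60` to `fit`. The stdlib-only sketch below is not autokaggle code — `time_budgeted_search`, the toy grid, and the score function are all hypothetical — but it illustrates the time-budgeted configuration search that such a limit implies: keep evaluating candidates until the deadline, return the best one seen.

```python
import time

def time_budgeted_search(candidates, score_fn, time_limit):
    """Evaluate candidate configurations until the time budget runs out,
    keeping the best-scoring one (higher is better)."""
    deadline = time.monotonic() + time_limit
    best_cfg, best_score = None, float("-inf")
    for cfg in candidates:
        if time.monotonic() >= deadline:
            break  # budget exhausted: stop and return the best so far
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Toy objective standing in for validation accuracy: prefer num_leaves == 48.
grid = [{"num_leaves": n} for n in range(8, 128, 8)]
best, score = time_budgeted_search(grid, lambda c: -abs(c["num_leaves"] - 48),
                                   time_limit=1.0)
print(best)  # → {'num_leaves': 48}
```

With a real AutoML system the score function would train and validate a model per candidate, so the deadline check matters far more than in this toy loop.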
datamllab/pyodds | ['outlier detection'] | ['PyODDS: An End-to-End Outlier Detection System'] | pyodds/algo/iforest.py test/pyodds/io_api_test.py pyodds/algo/pca.py pyodds/automl/cash.py pyodds/algo/cblof.py pyodds/algo/lof.py pyodds/algo/knn.py doc/source/conf.py test/IOTest/run_time_comparison.py test/demo.py test/IOTest/run_time_query.py pyodds/algo/algorithm_utils.py pyodds/algo/staticautoencoder.py pyodds/algo/autoencoder.py pyodds/utils/plotUtils.py pyodds/utils/utilities.py pyodds/automl/config_space.py pyodds/algo/dagmm.py test/IOTest/run_time_pandas.py pyodds/utils/importAlgorithm.py pyodds/algo/hbos.py pyodds/algo/robustcovariance.py pyodds/algo/base.py test/pyodds/function_api_test.py demo.py pyodds/algo/ocsvm.py setup.py pyodds/algo/lstmad.py pyodds/algo/lstmencdec.py test/IOTest/run_time_pandas_query.py pyodds/algo/sod.py pyodds/algo/luminolFunc.py TensorflowUtils deepBase PyTorchUtils AUTOENCODER AutoEncoderModule Base pairwise_distances_no_broadcast _pairwise_distances_no_broadcast_helper CBLOF DAGMMModule DAGMM invert_order _calculate_outlier_scores HBOS IFOREST KNN LOF LSTMSequence LSTMAD LSTMED LSTMEDModule luminolDet OCSVM PCA RCOV SOD StaticAutoEncoder Cash construct_search_space construct_classifier plot_predictions algorithm_selection visualize_outlierscore visualize_distribution_static visualize_distribution visualize_distribution_time_serie output_performance query_data insert_demo_data connect_server standardizer check_parameter str2bool insert_demo_data connect_server read query_demo_data connect_server test_static_api test_timestamp_api test_io_static test_io_time_serie test_function square digitize min log2 zeros range column_or_1d randint choice LSTMAD OCSVM LOF DAGMM StaticAutoEncoder luminolDet LSTMED print PCA SOD RCOV HBOS CBLOF KNN AUTOENCODER IFOREST list plot savefig range len show jointplot set savefig to_numpy fit_transform show set scatterplot savefig append to_numpy DataFrame fit_transform range len show DatetimeIndex set 
mean lineplot savefig DataFrame show FacetGrid arange sort map set dict scatter savefig append DataFrame range hlines len ones timedelta execute range datetime print recall_score precision_score f1_score accuracy_score max roc_auc_score cursor connect fetchall list reshape range description execute to_numpy DataFrame array fillna append len check_array fit print clock set_index to_datetime read_csv fetchall print description execute clock seed output_performance RandomState algorithm_selection print query_data close insert_demo_data decision_function connect_server clock predict fit seed output_performance RandomState algorithm_selection print query_data close insert_demo_data decision_function connect_server clock predict fit seed output_performance RandomState algorithm_selection visualize_outlierscore print query_data visualize_distribution insert_demo_data decision_function visualize_distribution_static connect_server clock predict fit seed output_performance RandomState algorithm_selection print query_data ts visualize_distribution_time_serie insert_demo_data decision_function connect_server clock predict fit check_parameter rand str2bool standardizer | # PyODDS [](https://travis-ci.com/datamllab/PyODDS) [](https://coveralls.io/github/datamllab/PyODDS?branch=master) [](https://pyodds.github.io/) [](https://www.codacy.com/manual/pyodds/PyODDS?utm_source=github.com&utm_medium=referral&utm_content=pyodds/PyODDS&utm_campaign=Badge_Grade) [](https://badge.fury.io/py/pyodds) Official Website: [http://pyodds.com/](http://pyodds.com/) ## **PyODDS** is an end-to-end **Python** system for **outlier** **detection** with **database** **support**. PyODDS provides outlier detection algorithms which meet the demands of users in different fields, w/wo data science or machine learning background. PyODDS gives the ability to execute machine learning algorithms in-database without moving data out of the database server or over the network.
It also provides access to a wide range of outlier detection algorithms, including statistical analysis and more recent deep learning based approaches. It is developed by [`DATA Lab`](http://faculty.cs.tamu.edu/xiahu/index.html) at Texas A&M University. PyODDS is featured for: | 1,843 |
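The pyodds row's dependency dump repeats the sequence `fit`, `decision_function`, `predict`, `output_performance`, which is the detector workflow its README describes. The stdlib-only toy below — `ZScoreDetector` is hypothetical and not a PyODDS algorithm — shows the fit / score / threshold shape such detectors follow.

```python
import statistics

class ZScoreDetector:
    """Minimal detector with the fit / decision_function / predict interface
    the PyODDS workflow implies (illustrative only)."""
    def __init__(self, threshold=2.0):
        self.threshold = threshold  # z-score above which a point is an outlier

    def fit(self, series):
        # Learn the location and scale of the training data.
        self.mean = statistics.mean(series)
        self.std = statistics.pstdev(series) or 1.0
        return self

    def decision_function(self, series):
        # Outlier score: absolute z-score of each point.
        return [abs(x - self.mean) / self.std for x in series]

    def predict(self, series):
        # 1 = outlier, 0 = inlier, by thresholding the score.
        return [int(s > self.threshold) for s in self.decision_function(series)]

data = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0, 10.05]
clf = ZScoreDetector().fit(data)
print(clf.predict(data))  # → [0, 0, 0, 0, 0, 1, 0]
```

A single extreme value inflates the fitted standard deviation, which is why the threshold here is 2.0 rather than the textbook 3.0; robust detectors (e.g. median/MAD-based) sidestep this.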
datamllab/tods | ['outlier detection', 'time series', 'anomaly detection'] | ['Detecting Spacecraft Anomalies Using LSTMs and Nonparametric Dynamic Thresholding', 'TODS: An Automated Time Series Outlier Detection System'] | primitive_tests/feature_analysis/HPFilter_pipeline.py tods/detection_algorithm/PyodMoGaal.py tods/sk_interface/test/detection_algorithm/test_ski_LOF.py tods/tests/common/test_kfold_split.py tods/timeseries_processing/HoltSmoothing.py tods/sk_interface/script/feature_analysis_skinterface_generation.py tods/sk_interface/test/feature_analysis/test_ski_NonNegativeMatrixFactorization.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalGmean.py datasets/script/NAB_add_label.py primitive_tests/feature_analysis/StatisticalVariation_pipeline.py tods/feature_analysis/StatisticalMaximum.py tods/tests/feature_analysis/test_StatisticalZeroCrossing.py primitive_tests/detection_algorithm/LOF_pipeline.py tods/feature_analysis/StatisticalAbsEnergy.py examples/axolotl_interface/example_pipelines/script/build_AutoEncoder_pipeline.py tods/tests/sk_interface/detection_algorithm/test_ski_IsolationForest.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalStd.py tods/sk_interface/data_processing/TimeIntervalTransform_skinterface.py tods/common/TODSBasePrimitives.py tods/sk_interface/test/feature_analysis/test_ski_DiscreteCosineTransform.py datasets/validate.py tods/detection_algorithm/DeepLog.py tods/sk_interface/feature_analysis/SKTruncatedSVD_skinterface.py tods/sk_interface/detection_algorithm/LSTMODetector_skinterface.py tods/sk_interface/feature_analysis/NonNegativeMatrixFactorization_skinterface.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMedianAbsoluteDeviation.py tods/feature_analysis/StatisticalVariation.py tods/tests/sk_interface/detection_algorithm/test_ski_So_Gaal.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalVecSum.py tods/data_processing/DatasetToDataframe.py 
tods/tests/sk_interface/feature_analysis/test_ski_StatisticalVariation.py tods/detection_algorithm/PyodKNN.py datasets/anomaly/transform_kpi.py tods/common/utils.py tods/common/TrainScoreSplit.py tods/tests/detection_algorithm/test_PyodIsolationForest.py tods/feature_analysis/MatrixProfile.py tods/data_processing/SKImputer.py tods/tests/detection_algorithm/test_PyodAE.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalStd.py tods/feature_analysis/StatisticalMinimum.py tods/sk_interface/timeseries_processing/TimeSeriesSeasonalityTrendDecomposition_skinterface.py examples/axolotl_interface/run_search.py tods/sk_interface/feature_analysis/StatisticalMeanAbs_skinterface.py tods/sk_interface/detection_algorithm/AutoRegODetector_skinterface.py primitive_tests/detection_algorithm/IsolationForest_pipline.py tods/tests/sk_interface/detection_algorithm/test_ski_LSTMODetector.py tods/sk_interface/test/feature_analysis/test_ski_AutoCorrelation.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMeanAbsTemporalDerivative.py tods/tests/data_processing/test_DuplicationValidation.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMinimum.py tods/sk_interface/test/detection_algorithm/test_ski_AutoRegODetector.py tods/timeseries_processing/SubsequenceSegmentation.py primitive_tests/timeseries_processing/QuantileTransform_pipeline.py tods/detection_algorithm/core/dagmm/dagmm.py tods/feature_analysis/StatisticalMeanTemporalDerivative.py tods/sk_interface/detection_algorithm/COF_skinterface.py tods/sk_interface/feature_analysis/StatisticalMinimum_skinterface.py tods/tests/sk_interface/feature_analysis/test_ski_FastFourierTransform.py tods/tests/sk_interface/feature_analysis/test_ski_HPFilter.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalAbsSum.py tods/tests/feature_analysis/test_NonNegativeMatrixFactorization.py tods/sk_interface/test/feature_analysis/test_ski_FastFourierTransform.py 
tods/tests/data_processing/test_ExtractColumnsBySemanticTypes.py tods/reinforcement/RuleBasedFilter.py tods/detection_algorithm/Ensemble.py tods/tests/feature_analysis/test_HPFilter.py tods/tests/sk_interface/feature_analysis/test_ski_WaveletTransform.py tods/sk_interface/feature_analysis/StatisticalKurtosis_skinterface.py tods/sk_interface/feature_analysis/DiscreteCosineTransform_skinterface.py primitive_tests/feature_analysis/StatisticalVecSum_pipeline.py tods/sk_interface/test/detection_algorithm/test_ski_LSTMODetector.py tods/schemas.py tods/feature_analysis/AutoCorrelation.py tods/common/KFoldSplit.py tods/detection_algorithm/PyodSoGaal.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMean.py tods/sk_interface/test/feature_analysis/test_ski_SpectralResidualTransform.py tods/common/KFoldSplitTimeseries.py tods/timeseries_processing/TimeSeriesSeasonalityTrendDecomposition.py tods/sk_interface/data_processing/ContinuityValidation_skinterface.py tods/tests/data_processing/test_DatasetToDataFrame.py tods/feature_analysis/SKTruncatedSVD.py tods/data_processing/TimeStampValidation.py tods/tests/timeseries_processing/run_tests.py tods/detection_algorithm/PCAODetect.py tods/tests/feature_analysis/test_StatisticalMeanAbsTemporalDerivative.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalZeroCrossing.py primitive_tests/feature_analysis/StatisticalGmean_pipeline.py primitive_tests/detection_algorithm/DeepLog_pipeline.py tods/detection_algorithm/core/dagmm/compression_net.py tods/sk_interface/test/feature_analysis/test_ski_BKFilter.py tods/tests/feature_analysis/test_MatrixProfile_feature.py tods/tests/detection_algorithm/run_tests.py primitive_tests/detection_algorithm/LSTMOD_pipeline.py tods/detection_algorithm/core/utility.py tods/tests/common/test_redact_columns.py tods/tests/sk_interface/feature_analysis/test_ski_NonNegativeMatrixFactorization.py tods/sk_interface/test/detection_algorithm/test_ski_PCAODetector.py 
primitive_tests/data_processing/ColumnFilter_pipeline.py tods/sk_interface/feature_analysis/StatisticalMeanTemporalDerivative_skinterface.py tods/tests/feature_analysis/test_FastFourierTransform.py tods/tests/timeseries_processing/test_SKAxiswiseScaler.py tods/sk_interface/feature_analysis/AutoCorrelation_skinterface.py tods/tests/sk_interface/detection_algorithm/test_ski_LOF.py tods/searcher/brute_force_search.py tods/__init__.py tods/sk_interface/script/feature_analysis_skitest_generation.py tods/feature_analysis/StatisticalKurtosis.py tods/data_processing/ColumnFilter.py tods/sk_interface/feature_analysis/SpectralResidualTransform_skinterface.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalVecSum.py primitive_tests/feature_analysis/StatisticalMaximum_pipeline.py primitive_tests/detection_algorithm/KDiscord_pipeline.py primitive_tests/detection_algorithm/PyodSoGaal_pipeline.py primitive_tests/detection_algorithm/HBOS_pipline.py tods/tests/timeseries_processing/test_HoltSmoothing.py datasets/tods_datasets.py tods/timeseries_processing/HoltWintersExponentialSmoothing.py tods/tests/feature_analysis/test_WaveletTransformer.py primitive_tests/feature_analysis/NonNegativeMatrixFactorization_pipeline.py tods/tests/detection_algorithm/test_PyodVAE.py tods/detection_algorithm/PyodCBLOF.py tods/tests/detection_algorithm/test_DeepLog.py tods/tests/sk_interface/detection_algorithm/test_ski_Telemanom.py tods/tests/sk_interface/detection_algorithm/test_ski_AutoEncoder.py tods/sk_interface/base.py tods/sk_interface/test/detection_algorithm/test_ski_SystemWiseDetection.py primitive_tests/feature_analysis/FastFourierTransform_pipeline.py tods/sk_interface/test/detection_algorithm/test_ski_COF.py tods/sk_interface/detection_algorithm/LODA_skinterface.py tods/sk_interface/data_processing/DuplicationValidation_skinterface.py tods/sk_interface/feature_analysis/__init__.py primitive_tests/timeseries_processing/AxiswiseScale_pipeline.py 
tods/detection_algorithm/core/KDiscord.py tods/sk_interface/detection_algorithm/VariationalAutoEncoder_skinterface.py primitive_tests/detection_algorithm/Telemanom_pipeline.py tods/sk_interface/test/feature_analysis/test_ski_WaveletTransform.py tods/sk_interface/timeseries_processing/SubsequenceSegmentation_skinterface.py tods/sk_interface/feature_analysis/BKFilter_skinterface.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalAbsEnergy.py tods/sk_interface/utils/data.py tods/tests/sk_interface/detection_algorithm/test_ski_PCAODetector.py tods/sk_interface/script/timeseries_processing_skinterface_generation.py tods/tests/detection_algorithm/test_PyodHBOS.py tods/sk_interface/feature_analysis/StatisticalMedian_skinterface.py primitive_tests/feature_analysis/Statistical_mean_absTemporalDerivative_pipeline.py tods/tests/sk_interface/utils/data.py tods/sk_interface/test/detection_algorithm/test_ski_VariationalAutoEncoder.py tods/feature_analysis/StatisticalStd.py tods/sk_interface/timeseries_processing/HoltSmoothing_skinterface.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMean.py examples/axolotl_interface/run_pipeline_system.py tods/tests/timeseries_processing/test_SubsequenceSegmentation.py tods/sk_interface/test/detection_algorithm/test_ski_HBOS.py tods/feature_analysis/NonNegativeMatrixFactorization.py tods/common/RedactColumns.py primitive_tests/detection_algorithm/ABOD_pipeline.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMinimum.py docs/conf.py examples/axolotl_interface/test_axolotl.py tods/sk_interface/script/detection_algorithm_skitest_generation.py tods/tests/feature_analysis/test_StastiticalStd.py tods/sk_interface/detection_algorithm/LOF_skinterface.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalVar.py tods/sk_interface/detection_algorithm/AutoEncoder_skinterface.py examples/axolotl_interface/run_pipeline_ensemble.py tods/detection_algorithm/PyodHBOS.py 
tods/tests/sk_interface/detection_algorithm/test_ski_ABOD.py tods/sk_interface/feature_analysis/StatisticalAbsEnergy_skinterface.py tods/detection_algorithm/PyodLOF.py tods/sk_interface/detection_algorithm/ABOD_skinterface.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMaximum.py tods/tests/timeseries_processing/test_SimpleExponentialSmoothing.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalKurtosis.py tods/sk_interface/feature_analysis/StatisticalSkew_skinterface.py primitive_tests/feature_analysis/StatisticalStd_pipeline.py tods/sk_interface/data_ensemble/__init__.py primitive_tests/detection_algorithm/SOD_pipeline.py tods/tests/data_processing/test_ColumnFilter.py tods/detection_algorithm/core/dagmm/gmm.py tods/sk_interface/detection_algorithm/So_Gaal_skinterface.py tods/sk_interface/detection_algorithm/Mo_Gaal_skinterface.py tods/tests/detection_algorithm/test_PyodCBLOF.py tods/tests/detection_algorithm/test_PyodLODA.py tods/sk_interface/feature_analysis/HPFilter_skinterface.py tods/sk_interface/timeseries_processing/MovingAverageTransformer_skinterface.py tods/sk_interface/script/detection_algorithm_skinterface_generation.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalHmean.py tods/feature_analysis/StatisticalGmean.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMedianAbsoluteDeviation.py tods/data_processing/__init__.py primitive_tests/feature_analysis/StatisticalMean_pipeline.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMeanAbs.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalSkew.py tods/tests/detection_algorithm/test_KDiscordODetect.py tods/tests/detection_algorithm/test_PyodLOF.py tods/detection_algorithm/PyodLODA.py datasets/dataset_utils.py tods/detection_algorithm/PyodSOD.py tods/detection_algorithm/UODBasePrimitive.py tods/tests/sk_interface/detection_algorithm/test_ski_SOD.py tods/tests/common/test_train_score_split.py 
tods/sk_interface/timeseries_processing/SimpleExponentialSmoothing_skinterface.py tods/feature_analysis/SpectralResidualTransform.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalHmean.py tods/detection_algorithm/MatrixProfile.py tods/tests/feature_analysis/test_StatisticalMedianAbsoluteDeviation.py primitive_tests/feature_analysis/TRMF_pipeline.py tods/sk_interface/test/detection_algorithm/test_ski_Telemanom.py tods/sk_interface/feature_analysis/TRMF_skinterface.py primitive_tests/reinforcement/RuleBasedFilter_pipline.py tods/sk_interface/feature_analysis/StatisticalMeanAbsTemporalDerivative_skinterface.py tods/tests/data_processing/test_ColumnParser.py primitive_tests/data_processing/DuplicationValidation_pipeline.py tods/common/FixedSplit.py tods/sk_interface/feature_analysis/StatisticalAbsSum_skinterface.py tods/utils.py tods/detection_algorithm/SystemWiseDetection.py tods/feature_analysis/StatisticalHmean.py primitive_tests/timeseries_processing/PowerTransform_pipeline.py primitive_tests/feature_analysis/StatisticalMinimum_pipeline.py tods/detection_algorithm/__init__.py tods/sk_interface/test/detection_algorithm/test_ski_So_Gaal.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalZeroCrossing.py tods/sk_interface/detection_algorithm/SOD_skinterface.py tods/searcher/__init__.py primitive_tests/timeseries_processing/SimpleExponentialSmoothing_pipeline.py examples/sk_examples/MatrixProfile_test.py tods/sk_interface/detection_algorithm/HBOS_skinterface.py tods/timeseries_processing/SKPowerTransformer.py tods/detection_algorithm/AutoRegODetect.py tods/tests/feature_analysis/test_StatisticalMean.py tods/tests/common/test_denormalize.py tods/tests/sk_interface/detection_algorithm/test_ski_LODA.py tods/feature_analysis/HPFilter.py tods/tests/sk_interface/detection_algorithm/test_ski_MatrixProfile.py tods/sk_interface/detection_algorithm/PCAODetector_skinterface.py tods/sk_interface/feature_analysis/StatisticalHmean_skinterface.py 
tods/sk_interface/detection_algorithm/__init__.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalGmean.py tods/tests/sk_interface/detection_algorithm/test_ski_KNN.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMaximum.py tods/tests/feature_analysis/run_tests.py tods/sk_interface/timeseries_processing/SKQuantileTransformer_skinterface.py examples/sk_examples/IsolationForest_test.py tods/data_processing/ColumnParser.py primitive_tests/feature_analysis/StatisticalAbsEnergy_pipeline.py tods/tests/feature_analysis/test_DiscreteCosineTransform.py primitive_tests/data_processing/CategoricalToBinary_pipeline.py tods/sk_interface/test/feature_analysis/run_tests.py tods/tests/data_processing/utils.py primitive_tests/feature_analysis/StatisticalZeroCrossing_pipeline.py tods/timeseries_processing/MovingAverageTransformer.py primitive_tests/data_processing/TimeIntervalTransform_pipeline.py tods/tests/sk_interface/feature_analysis/test_ski_TRMF.py tods/tests/sk_interface/detection_algorithm/run_tests.py tods/common/NoSplit.py tods/tests/detection_algorithm/test_PyodOCSVM.py primitive_tests/feature_analysis/SpectralResidualTransform_pipeline.py tods/tests/timeseries_processing/test_TimeSeriesSeasonalityTrendDecomposition.py tods/tests/common/test_no_split.py tods/sk_interface/feature_analysis/StatisticalGmean_skinterface.py tods/tests/feature_analysis/test_SKTruncatedSVD.py primitive_tests/feature_analysis/StatisticalMedian_pipeline.py tods/detection_algorithm/core/utils/modeling.py tods/tests/detection_algorithm/test_MatrixProfile.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMedian.py tods/sk_interface/test/detection_algorithm/test_ski_KNN.py tods/sk_interface/detection_algorithm/MatrixProfile_skinterface.py tods/tests/timeseries_processing/test_SKStandardizer.py primitive_tests/timeseries_processing/Standardize_pipeline.py tods/sk_interface/test/feature_analysis/test_ski_SKTruncatedSVD.py 
tods/feature_analysis/StatisticalVecSum.py tods/tests/feature_analysis/test_StatisticalAbsEnergy.py setup.py tods/sk_interface/test/detection_algorithm/test_ski_DeepLog.py tods/tests/feature_analysis/test_StatisticalVariation.py tods/sk_interface/feature_analysis/StatisticalStd_skinterface.py tods/detection_algorithm/KDiscordODetect.py primitive_tests/timeseries_processing/SeasonalityTrendDecomposition_pipeline.py primitive_tests/feature_analysis/BKFilter_pipeline.py tods/timeseries_processing/SKAxiswiseScaler.py tods/detection_algorithm/core/PCA.py datasets/anomaly/transform_yahoo.py primitive_tests/detection_algorithm/MatrixProfile_pipeline.py tods/feature_analysis/DiscreteCosineTransform.py tods/sk_interface/feature_analysis/WaveletTransform_skinterface.py tods/tests/data_processing/test_TimeIntervalTransform.py tods/sk_interface/detection_algorithm/CBLOF_skinterface.py tods/detection_algorithm/core/AutoRegOD.py primitive_tests/detection_algorithm/HBOS_score_pipeline.py tods/sk_interface/__init__.py tods/tests/sk_interface/feature_analysis/test_ski_AutoCorrelation.py tods/detection_algorithm/core/utils/errors.py tods/sk_interface/data_processing/TimeStampValidation_skinterface.py datasets/hub.py tods/tests/timeseries_processing/utils.py tods/tests/data_processing/test_SKImputer.py primitive_tests/feature_analysis/WaveletTransform_pipeline.py tods/tests/common/test_fixed_split.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalAbsEnergy.py tods/tests/sk_interface/detection_algorithm/test_ski_AutoRegODetector.py tods/tests/utils.py primitive_tests/feature_analysis/StatisticalSkew_pipeline.py tods/detection_algorithm/core/dagmm/estimation_net.py tods/tests/detection_algorithm/utils.py primitive_tests/feature_analysis/StatisticalWillisonAmplitude_pipeline.py tods/tests/detection_algorithm/test_PyodCOF.py tods/sk_interface/test/feature_analysis/test_ski_TRMF.py tods/tests/feature_analysis/test_Autocorrelation.py tods/common/CSVReader.py 
tods/sk_interface/feature_analysis/StatisticalVar_skinterface.py tods/sk_interface/test/detection_algorithm/test_ski_CBLOF.py tods/tests/sk_interface/detection_algorithm/test_ski_DeepLog.py tods/sk_interface/feature_analysis/StatisticalVariation_skinterface.py tods/data_processing/TimeIntervalTransform.py tods/sk_interface/feature_analysis/StatisticalZeroCrossing_skinterface.py tods/sk_interface/data_ensemble/Ensemble_skinterface.py tods/sk_interface/data_processing/CategoricalToBinary_skinterface.py tods/sk_interface/feature_analysis/StatisticalWillisonAmplitude_skinterface.py tods/feature_analysis/StatisticalMedian.py tods/tests/feature_analysis/test_StatisticalVecSum.py tods/sk_interface/timeseries_processing/SKStandardScaler_skinterface.py tods/timeseries_processing/SKStandardScaler.py tods/tests/feature_analysis/test_StatisticalSkew.py tods/data_processing/utils.py tods/tests/sk_interface/feature_analysis/test_ski_SKTruncatedSVD.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMeanTemporalDerivative.py primitive_tests/timeseries_processing/HoltWintersExponentialSmoothing_pipeline.py tods/sk_interface/timeseries_processing/HoltWintersExponentialSmoothing_skinterface.py tods/tests/data_processing/test_CategoricalBinary.py tods/detection_algorithm/core/LSTMOD.py tods/tests/detection_algorithm/test_AutoRegODetect.py tods/tests/detection_algorithm/test_PyodSOD.py primitive_tests/detection_algorithm/LODA_pipeline.py tods/tests/feature_analysis/test_StatisticalMeanTemporalDerivative.py examples/axolotl_interface/run_pipeline.py primitive_tests/timeseries_processing/HoltSmoothing_pipeline.py tods/tests/feature_analysis/test_StatisticalKurtosis.py tods/tests/sk_interface/detection_algorithm/test_ski_CBLOF.py tods/sk_interface/feature_analysis/StatisticalMedianAbsoluteDeviation_skinterface.py tods/feature_analysis/BKFilter.py examples/sk_examples/DeepLog_test.py tods/sk_interface/feature_analysis/FastFourierTransform_skinterface.py 
tods/detection_algorithm/core/dagmm/__init__.py tods/sk_interface/test/detection_algorithm/test_ski_OCSVM.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalWillisonAmplitude.py tods/feature_analysis/StatisticalAbsSum.py tods/tests/feature_analysis/test_StatisticalMeanAbs.py tods/tests/feature_analysis/test_StatisticalWillisonAmplitude.py tods/tests/feature_analysis/test_StatisticalGmean.py tods/sk_interface/test/detection_algorithm/test_ski_ABOD.py tods/data_processing/DuplicationValidation.py tods/sk_interface/test/detection_algorithm/run_tests.py tods/tests/data_processing/run_tests.py primitive_tests/data_processing/ContinuityValidation_pipline.py tods/sk_interface/detection_algorithm/Telemanom_skinterface.py tods/tests/common/run_tests.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMeanAbsTemporalDerivative.py tods/feature_analysis/TRMF.py tods/detection_algorithm/core/MultiAutoRegOD.py tods/tests/sk_interface/feature_analysis/test_ski_BKFilter.py primitive_tests/detection_algorithm/PCAODetect_pipeline.py tods/tests/detection_algorithm/test_PyodSoGaal.py tods/sk_interface/test/detection_algorithm/test_ski_IsolationForest.py tods/tests/sk_interface/feature_analysis/run_tests.py tods/detection_algorithm/core/CollectiveBase.py primitive_tests/detection_algorithm/CBLOF_pipline.py primitive_tests/feature_analysis/StatisticalMeanTemporalDerivative.py tods/tests/timeseries_processing/test_SKQuantileTransformer.py tods/feature_analysis/StatisticalMeanAbsTemporalDerivative.py tods/sk_interface/detection_algorithm/SystemWiseDetection_skinterface.py tods/sk_interface/test/system_detection/test_system_KNN.py tods/detection_algorithm/LSTMODetect.py tods/data_processing/CategoricalToBinary.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalVar.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalAbsSum.py tods/sk_interface/timeseries_processing/SKPowerTransformer_skinterface.py 
tods/feature_analysis/StatisticalZeroCrossing.py tods/tests/feature_analysis/test_StatisticalVar.py tods/sk_interface/timeseries_processing/SKAxiswiseScaler_skinterface.py tods/data_processing/ConstructPredictions.py tods/feature_analysis/StatisticalSkew.py tods/data_processing/ContinuityValidation.py tods/sk_interface/detection_algorithm/OCSVM_skinterface.py tods/tests/data_processing/test_ContinuityValidation.py primitive_tests/detection_algorithm/VariationalAutoEncoder_pipeline.py tods/sk_interface/test/detection_algorithm/test_ski_KDiscordODetector.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalSkew.py tods/tests/data_processing/test_ConstructPredictions.py tods/detection_algorithm/PyodABOD.py tods/feature_analysis/FastFourierTransform.py tods/sk_interface/data_processing/SKImputer_skinterface.py examples/sk_examples/Telemanom_test.py tods/sk_interface/script/data_processing_skinterface_generation.py tods/sk_interface/test/detection_algorithm/test_ski_MatrixProfile.py primitive_tests/detection_algorithm/OCSVM_pipline.py tods/tests/feature_analysis/test_StatisticalMedian.py tods/tests/sk_interface/detection_algorithm/test_ski_VariationalAutoEncoder.py tods/data_processing/ExtractColumnsBySemanticTypes.py tods/tests/detection_algorithm/test_Telemanom.py tods/tests/data_processing/test_TimeStampValidation.py tods/feature_analysis/WaveletTransform.py tods/tests/feature_analysis/test_StatisticalMaximum.py tods/common/Denormalize.py tods/detection_algorithm/PyodVAE.py tods/sk_interface/test/feature_analysis/test_ski_HPFilter.py primitive_tests/timeseries_processing/MeanAverageTransform_pipeline.py tods/tests/feature_analysis/test_StatisticalMinimum.py tods/detection_algorithm/PyodIsolationForest.py primitive_tests/detection_algorithm/PyodCOF.py tods/feature_analysis/__init__.py tods/tests/feature_analysis/test_BKFilter.py tods/timeseries_processing/SKQuantileTransformer.py tods/detection_algorithm/Telemanom.py 
tods/sk_interface/test/feature_analysis/test_ski_StatisticalWillisonAmplitude.py tods/sk_interface/detection_algorithm/IsolationForest_skinterface.py tods/detection_algorithm/PyodAE.py tods/tests/detection_algorithm/test_DAGMM.py tods/tests/common/test_csv_reader.py tods/detection_algorithm/core/UODCommonTest.py tods/timeseries_processing/__init__.py tods/tests/feature_analysis/utils.py tods/detection_algorithm/core/utils/channel.py tods/tests/sk_interface/detection_algorithm/test_ski_HBOS.py tods/sk_interface/feature_analysis/StatisticalMaximum_skinterface.py tods/tests/run_tests.py tods/tests/sk_interface/detection_algorithm/test_ski_COF.py tods/feature_analysis/StatisticalMean.py tods/tests/sk_interface/feature_analysis/test_ski_StatisticalMeanTemporalDerivative.py tods/feature_analysis/StatisticalMeanAbs.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalKurtosis.py examples/sk_examples/LSTMOD_test.py primitive_tests/detection_algorithm/AutoRegODetect_pipeline.py tods/detection_algorithm/SystemWiseDetection_bkup.py primitive_tests/detection_algorithm/AutoEncoder_pipeline.py tods/detection_algorithm/DAGMM.py tods/tests/sk_interface/detection_algorithm/test_ski_Mo_Gaal.py tods/feature_analysis/StatisticalMedianAbsoluteDeviation.py primitive_tests/feature_analysis/StatisticalHmean_pipeline.py primitive_tests/feature_analysis/StatisticalMedianAbsoluteDeviation.py tods/tests/sk_interface/feature_analysis/test_ski_DiscreteCosineTransform.py primitive_tests/feature_analysis/DiscreteCosineTransform_pipeline.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalVariation.py tods/tests/detection_algorithm/test_PCAODetect.py tods/tests/detection_algorithm/test_PyodMoGaal.py tods/detection_algorithm/core/CollectiveCommonTest.py tods/sk_interface/detection_algorithm/DeepLog_skinterface.py tods/tests/detection_algorithm/test_PyodKNN.py tods/tests/feature_analysis/test_StatisticalHmean.py 
tods/sk_interface/detection_algorithm/KDiscordODetector_skinterface.py tods/tests/sk_interface/detection_algorithm/test_ski_KDiscordODetector.py tods/timeseries_processing/SimpleExponentialSmoothing.py primitive_tests/detection_algorithm/PyodMoGaal_pipeline.py tods/sk_interface/detection_algorithm/KNN_skinterface.py tods/sk_interface/test/detection_algorithm/test_ski_LODA.py tods/detection_algorithm/PyodCOF.py tods/tests/common/test_kfold_timeseries_split.py primitive_tests/feature_analysis/StatisticalAbsSum.py tods/feature_analysis/StatisticalVar.py tods/feature_analysis/StatisticalWillisonAmplitude.py primitive_tests/feature_analysis/StatisticalVar_pipeline.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMeanAbs.py tods/tests/sk_interface/run_tests.py tods/tests/feature_analysis/test_SpectralResidualTransform.py tods/sk_interface/test/detection_algorithm/test_ski_AutoEncoder.py examples/axolotl_interface/example_pipelines/script/build_system_pipeline.py tods/sk_interface/feature_analysis/StatisticalMean_skinterface.py tods/tests/detection_algorithm/test_PyodABOD.py primitive_tests/detection_algorithm/KNN_pipeline.py tods/tests/feature_analysis/test_StatisticalAbsSum.py tods/tests/timeseries_processing/test_SKPowerTransformer.py tods/detection_algorithm/PyodOCSVM.py primitive_tests/feature_analysis/TruncatedSVD_pipeline.py tods/sk_interface/test/detection_algorithm/test_ski_Mo_Gaal.py tods/tests/feature_analysis/test_TRMF.py tods/detection_algorithm/core/test_CollectiveBase.py primitive_tests/feature_analysis/StatisticalKurtosis_pipeline.py primitive_tests/feature_analysis/StatisticalMeanAbs_pipeline.py datasets/tods_dataset_base.py tods/tests/sk_interface/detection_algorithm/test_ski_OCSVM.py tods/sk_interface/test/detection_algorithm/test_ski_SOD.py tods/tests/sk_interface/feature_analysis/test_ski_SpectralResidualTransform.py tods/sk_interface/test/feature_analysis/test_ski_StatisticalMedian.py tods/tests/detection_algorithm/test_LSTMODetector.py 
tods/tests/timeseries_processing/test_MovingAverageTransform.py tods/sk_interface/feature_analysis/StatisticalVecSum_skinterface.py tods/tests/timeseries_processing/test_HoltWintersExponentialSmoothing.py read_file_entry_points merge_entry_points _save_response_content extract_archive _quota_exceeded check_md5 verify_str_arg _is_tgz _is_targz list_dir _is_tarxz _is_zip download_file_from_google_drive download_url _get_confirm_token check_integrity iterable_to_str _is_tar list_files download_and_extract_archive gen_bar_updater calculate_md5 _is_gzip kpi_dataset NAB_dataset yahoo_dataset TODS_dataset validate validate_dataset_path map_dataset_id validate_keywords validate_files configure_parser validate_multi_index get_all_columns search_directory validate_target validate_metrics read_csv datasets_equal canonical_dataset_description validate_edgelist validate_collection handler get_file_extension main validate_dataset validate_dataset_reference validate_dataset_description validate_column_values validate_target_values validate_problem_description skip_member setup generate_metrics generate_dataset_problems test generate_data_preparation_pipeline generate_scoring_pipeline generate_data_preparation_params generate_pipelines load_default_pipeline evaluate_pipeline load_pipeline generate_problem generate_dataset CSVReaderPrimitive DenormalizePrimitive Hyperparams FixedSplitDatasetSplitPrimitive Hyperparams KFoldDatasetSplitPrimitive Hyperparams Hyperparams KFoldTimeSeriesSplitPrimitive NoSplitDatasetSplitPrimitive Hyperparams Hyperparams RedactColumnsPrimitive TODSUnsupervisedLearnerPrimitiveBase TODSTransformerPrimitiveBase Hyperparams TrainScoreDatasetSplitPrimitive append_columns get_column_index_from_column_name horizontal_concat insert_columns_metadata parse_datetime_to_float parse_datetime replace_columns_metadata build_relation_graph copy_elements_metadata select_columns_metadata replace_columns get_tabular_resource_metadata select_columns remove_columns_metadata 
list_columns_with_structural_types get_index_columns set_table_metadata get_columns_to_use get_tabular_resource combine_columns combine_columns_metadata list_columns_with_semantic_types horizontal_concat_metadata copy_metadata remove_columns insert_columns cut_dataset append_columns_metadata Cat2B Hyperparams CategoricalToBinaryPrimitive ColumnFilterPrimitive Hyperparams Hyperparams ColumnParserPrimitive Hyperparams ConstructPredictionsPrimitive Hyperparams ContinuityValidationPrimitive DatasetToDataFramePrimitive Hyperparams DuplicationValidationPrimitive Hyperparams ExtractColumnsBySemanticTypesPrimitive Hyperparams SKImputerPrimitive Params Hyperparams TimeIntervalTransformPrimitive Hyperparams TimeStampValidationPrimitive Hyperparams append_columns get_column_index_from_column_name horizontal_concat insert_columns_metadata parse_datetime_to_float parse_datetime replace_columns_metadata build_relation_graph copy_elements_metadata select_columns_metadata replace_columns get_tabular_resource_metadata select_columns remove_columns_metadata list_columns_with_structural_types get_index_columns set_table_metadata get_columns_to_use get_tabular_resource combine_columns combine_columns_metadata list_columns_with_semantic_types horizontal_concat_metadata copy_metadata remove_columns insert_columns cut_dataset append_columns_metadata AutoRegODetectorPrimitive Params Hyperparams DAGMMPrimitive Params Hyperparams DeepLogPrimitive Params DeeplogLstm Hyperparams Params EnsemblePrimitive Hyperparams KDiscordODetectorPrimitive Params Hyperparams Params LSTMODetectorPrimitive Hyperparams Params MP Hyperparams MatrixProfilePrimitive PCAODetectorPrimitive Params Hyperparams Params ABODPrimitive Hyperparams AutoEncoderPrimitive Params Hyperparams CBLOFPrimitive Params Hyperparams Params COFPrimitive Hyperparams Params Hyperparams HBOSPrimitive IsolationForestPrimitive Params Hyperparams Params Hyperparams KNNPrimitive LODAPrimitive Params Hyperparams LOFPrimitive Params Hyperparams 
Params Mo_GaalPrimitive Hyperparams Params OCSVMPrimitive Hyperparams SODPrimitive Params Hyperparams Params So_GaalPrimitive Hyperparams Params VariationalAutoEncoderPrimitive Hyperparams Params SystemWiseDetectionPrimitive Hyperparams Params SystemWiseDetectionPrimitive Hyperparams Params Detector Hyperparams TelemanomPrimitive Params_ODBase UnsupervisedOutlierDetectorBase2 Hyperparams_ODBase UnsupervisedOutlierDetectorBase AutoRegOD _pprint CollectiveBaseDetector CollectiveCommonTest KDiscord main LSTMOutlierDetector MultiAutoRegOD PCA T ModifyInitParams Dummy1 Dummy3 VargEstimator Dummy2 K TestBASE MyEstimator UODCommonTest get_sub_sequences_length get_sub_matrices CompressionNet DAGMM EstimationNet GMM Channel ErrorWindow Errors Model StatisticalMaximumPrimitive Params Hyperparams PrimitiveCount ACF AutoCorrelationPrimitive Hyperparams Params BKFilterPrimitive Hyperparams DiscreteCosineTransformPrimitive Hyperparams DCT FastFourierTransformPrimitive Hyperparams FFT HPFilterPrimitive Hyperparams MP Hyperparams MatrixProfilePrimitive NMF NonNegativeMatrixFactorizationPrimitive Hyperparams SKTruncatedSVDPrimitive Params PrimitiveCount Hyperparams Params Hyperparams SpectralResidualTransformPrimitive Params StatisticalAbsEnergyPrimitive Hyperparams Params StatisticalAbsSumPrimitive Hyperparams Params StatisticalGmeanPrimitive Hyperparams StatisticalHmeanPrimitive Params Hyperparams Params StatisticalKurtosisPrimitive Hyperparams StatisticalMeanPrimitive Params Hyperparams StatisticalMeanAbsPrimitive Params Hyperparams StatisticalMeanAbsTemporalDerivativePrimitive Params Hyperparams StatisticalMeanTemporalDerivativePrimitive Params Hyperparams Params StatisticalMedianPrimitive Hyperparams Params StatisticalMedianAbsoluteDeviationPrimitive Hyperparams StatisticalMinimumPrimitive Params Hyperparams Params StatisticalSkewPrimitive Hyperparams StatisticalStdPrimitive Params Hyperparams StatisticalVarPrimitive Params Hyperparams StatisticalVariationPrimitive Params 
Hyperparams StatisticalVecSumPrimitive Params Hyperparams StatisticalWillisonAmplitudePrimitive Params Hyperparams Params StatisticalZeroCrossingPrimitive Hyperparams TRMFPrimitive trmf Hyperparams Wavelet WaveletTransformPrimitive Hyperparams RuleBasedFilter Hyperparams _generate_data_preparation_params _rank_first_metric _generate_pipline BruteForceSearch _generate_pipelines _generate_scoring_pipeline _generate_data_preparation_pipeline BaseSKI get_default_hyperparameter EnsembleSKI CategoricalToBinarySKI ContinuityValidationSKI DuplicationValidationSKI SKImputerSKI TimeIntervalTransformSKI TimeStampValidationSKI ABODSKI AutoEncoderSKI AutoRegODetectorSKI CBLOFSKI COFSKI DeepLogSKI HBOSSKI IsolationForestSKI KDiscordODetectorSKI KNNSKI LODASKI LOFSKI LSTMODetectorSKI MatrixProfileSKI Mo_GaalSKI OCSVMSKI PCAODetectorSKI SODSKI So_GaalSKI SystemWiseDetectionSKI TelemanomSKI VariationalAutoEncoderSKI StatisticalKurtosisSKI AutoCorrelationSKI BKFilterSKI DiscreteCosineTransformSKI FastFourierTransformSKI HPFilterSKI NonNegativeMatrixFactorizationSKI SKTruncatedSVDSKI SpectralResidualTransformSKI StatisticalAbsEnergySKI StatisticalAbsSumSKI StatisticalGmeanSKI StatisticalHmeanSKI StatisticalMaximumSKI StatisticalMeanAbsTemporalDerivativeSKI StatisticalMeanAbsSKI StatisticalMeanTemporalDerivativeSKI StatisticalMeanSKI StatisticalMedianAbsoluteDeviationSKI StatisticalMedianSKI StatisticalMinimumSKI StatisticalSkewSKI StatisticalStdSKI StatisticalVariationSKI StatisticalVarSKI StatisticalVecSumSKI StatisticalWillisonAmplitudeSKI StatisticalZeroCrossingSKI TRMFSKI WaveletTransformSKI ABODSKI_TestCase AutoEncoderSKI_TestCase AutoRegODetectorSKI_TestCase CBLOFSKI_TestCase COFSKI_TestCase DeepLogSKI_TestCase HBOSSKI_TestCase IsolationForestSKI_TestCase KDiscordODetectorSKI_TestCase KNNSKI_TestCase LODASKI_TestCase LOFSKI_TestCase LSTMODetectorSKI_TestCase MatrixProfileSKI_TestCase Mo_GaalSKI_TestCase OCSVMSKI_TestCase PCAODetectorSKI_TestCase SODSKI_TestCase 
So_GaalSKI_TestCase SystemWiseDetectionSKI_TestCase TelemanomSKI_TestCase VariationalAutoEncoderSKI_TestCase StatisticalHmeanSKI_TestCase AutoCorrelationSKI_TestCase BKFilterSKI_TestCase DiscreteCosineTransformSKI_TestCase FastFourierTransformSKI_TestCase HPFilterSKI_TestCase NonNegativeMatrixFactorizationSKI_TestCase SKTruncatedSVDSKI_TestCase SpectralResidualTransformSKI_TestCase StatisticalAbsEnergySKI_TestCase StatisticalAbsSumSKI_TestCase StatisticalGmeanSKI_TestCase StatisticalKurtosisSKI_TestCase StatisticalMaximumSKI_TestCase StatisticalMeanSKI_TestCase StatisticalMeanAbsSKI_TestCase StatisticalMeanAbsTemporalDerivativeSKI_TestCase StatisticalMeanTemporalDerivativeSKI_TestCase StatisticalMedianSKI_TestCase StatisticalMedianAbsoluteDeviationSKI_TestCase StatisticalMinimumSKI_TestCase StatisticalSkewSKI_TestCase StatisticalStdSKI_TestCase StatisticalVarSKI_TestCase StatisticalVariationSKI_TestCase StatisticalVecSumSKI_TestCase StatisticalWillisonAmplitudeSKI_TestCase StatisticalZeroCrossingSKI_TestCase TRMFSKI_TestCase WaveletTransformSKI_TestCase HoltSmoothingSKI HoltWintersExponentialSmoothingSKI MovingAverageTransformerSKI SimpleExponentialSmoothingSKI SKAxiswiseScalerSKI SKPowerTransformerSKI SKQuantileTransformerSKI SKStandardScalerSKI SubsequenceSegmentationSKI TimeSeriesSeasonalityTrendDecompositionSKI generate_3D_data load_sys_data generate_sys_feature get_dataframe convert_metadata convert_through_json normalize_semantic_types effective_metadata load_iris_metadata test_iris_metadata CSVReaderPrimitiveTestCase DenormalizePrimitiveTestCase FixedSplitDatasetSplitPrimitiveTestCase KFoldDatasetSplitPrimitiveTestCase KFoldTimeSeriesSplitPrimitiveTestCase NoSplitDatasetSplitPrimitiveTestCase RedactColumnsPrimitiveTestCase TrainScoreDatasetSplitPrimitiveTestCase CategoricalBinaryTestCase ColumnFilterTest ColumnParserPrimitiveTestCase ConstructPredictionsPrimitiveTestCase ContinuityValidationTest ColumnParserPrimitiveTestCase DuplicationValidationTest 
ExtractColumnsBySemanticTypePrimitiveTestCase SkImputerTestCase TimeIntervalTransformTestCase TimeStampValidationTestCase get_dataframe convert_metadata convert_through_json normalize_semantic_types effective_metadata load_iris_metadata test_iris_metadata AutoRegODetectTestCase DAGMMTest DeepLogTest KDiscordODetectTestCase LSTMODTestCase MatrixProfileTest KDiscordODetectTestCase ABODTest PyodAECase PyodLOFTestCase COFTest HBOSTest PyodIsolationForestTestCase PyodKNNTestCase PyodLODATestCase PyodLOFTestCase PyodSoGaalTestCase PyodOCSVMTestCase SODTest PyodSoGaalTestCase PyodAVECase TelemanomTest get_dataframe convert_metadata convert_through_json normalize_semantic_types effective_metadata load_iris_metadata test_iris_metadata StatisticalGmeanTestCase AutoCorrelationTestCase BKFilterTest DctTestCase FftTestCase HPFilterTest MatrixProfileTest NmfTestCase SKTruncatedSVDTest SpectralResidualTransformTestCase StatisticalStdTestCase StatisticalAbsEnergyTestCase StatisticalAbsSumTestCase StatisticalHmeanTestCase StatisticalKurtosisTestCase StatisticalMaximumTestCase StatisticalMeanTestCase StatisticalMeanAbsTestCase StatisticalMeanAbsTemporalDerivativeTestCase StatisticalMeanTemporalDerivativeTestCase StatisticalMedianTestCase StatisticalMedianAbsoluteDeviationTestCase StatisticalMinimumTestCase StatisticalSkewTestCase StatisticalVarTestCase StatisticalVariationTestCase StatisticalVecSumTestCase StatisticalWillisonAmplitudeTestCase StatisticalZeroCrossingTestCase TRMFTest WaveletTransformerTestCase get_dataframe convert_metadata convert_through_json normalize_semantic_types effective_metadata load_iris_metadata test_iris_metadata ABODSKI_TestCase AutoEncoderSKI_TestCase AutoRegODetectorSKI_TestCase CBLOFSKI_TestCase COFSKI_TestCase DeepLogSKI_TestCase HBOSSKI_TestCase IsolationForestSKI_TestCase KDiscordODetectorSKI_TestCase KNNSKI_TestCase LODASKI_TestCase LOFSKI_TestCase LSTMODetectorSKI_TestCase MatrixProfileSKI_TestCase Mo_GaalSKI_TestCase OCSVMSKI_TestCase 
PCAODetectorSKI_TestCase SODSKI_TestCase So_GaalSKI_TestCase TelemanomSKI_TestCase VariationalAutoEncoderSKI_TestCase StatisticalHmeanSKI_TestCase AutoCorrelationSKI_TestCase BKFilterSKI_TestCase DiscreteCosineTransformSKI_TestCase FastFourierTransformSKI_TestCase HPFilterSKI_TestCase NonNegativeMatrixFactorizationSKI_TestCase SKTruncatedSVDSKI_TestCase SpectralResidualTransformSKI_TestCase StatisticalAbsEnergySKI_TestCase StatisticalAbsSumSKI_TestCase StatisticalGmeanSKI_TestCase StatisticalKurtosisSKI_TestCase StatisticalMaximumSKI_TestCase StatisticalMeanSKI_TestCase StatisticalMeanAbsSKI_TestCase StatisticalMeanAbsTemporalDerivativeSKI_TestCase StatisticalMeanTemporalDerivativeSKI_TestCase StatisticalMedianSKI_TestCase StatisticalMedianAbsoluteDeviationSKI_TestCase StatisticalMinimumSKI_TestCase StatisticalSkewSKI_TestCase StatisticalStdSKI_TestCase StatisticalVarSKI_TestCase StatisticalVariationSKI_TestCase StatisticalVecSumSKI_TestCase StatisticalWillisonAmplitudeSKI_TestCase StatisticalZeroCrossingSKI_TestCase TRMFSKI_TestCase WaveletTransformSKI_TestCase generate_3D_data load_sys_data generate_sys_feature HoltSmoothingTestCase HoltSmoothingTestCase MovingAverageTransformTestCase SimpleExponentialSmoothingTestCase SKStandardizationTestCase SKPowerTransformerTestCase SKQuantileTransformerTestCase SKStandardizationTestCase SubsequenceSegmentationTest TimeSeriesSeasonalityTrendDecompositionTestCase get_dataframe convert_metadata convert_through_json normalize_semantic_types effective_metadata load_iris_metadata test_iris_metadata HoltSmoothingPrimitive Params Hyperparams Params HoltWintersExponentialSmoothingPrimitive Hyperparams MovingAverageTransformerPrimitive Params Hyperparams Params SimpleExponentialSmoothingPrimitive Hyperparams SKAxiswiseScalerPrimitive Scaler Hyperparams Params SKPowerTransformerPrimitive Hyperparams Params SKQuantileTransformerPrimitive Hyperparams SKStandardScalerPrimitive Params Hyperparams SubsequenceSegmentationPrimitive 
Hyperparams Params Hyperparams TimeSeriesSeasonalityTrendDecompositionPrimitive dict list replace set tqdm md5 join basename urlretrieve print check_integrity expanduser makedirs expanduser expanduser get join _save_response_content _quota_exceeded print _get_confirm_token expanduser Session makedirs items list startswith _is_tarxz _is_zip join remove _is_tar dirname _is_gzip join basename format print download_url expanduser format format print endswith append sep split get format print add set get format print set join format print dirname read_csv update items list format join print set dirname get_file_extension walk join groupby format set_index print dirname append read_csv print format get list format print set any append get_file_extension keys print format validate_dataset validate_dataset_path get format print endswith validate_keywords validate_metrics append sep split join list format value_counts print dirname keys read_csv get join list format print set dirname keys read_csv len get join dirname read_csv print get_all_columns format deepcopy get_common_funny_recursive DeepDirCmp remove format get_right_only_recursive print dirname get_diff_files_recursive get_left_only_recursive print format enumerate get canonical_dataset_description list defaultdict format print endswith append values join walk abspath add_argument print directories exit parse_args handler ArgumentParser configure_parser connect get_scoring_pipeline get_splitting_pipeline append read_csv generate_dataset_problem add_output add_input add_step add_argument PrimitiveStep append add_hyperparameter Pipeline evaluate_pipeline format generate_metrics print generate_dataset_problems id generate_data_preparation_pipeline SimpleRunner generate_scoring_pipeline enumerate generate_data_preparation_params generate_pipelines load_pipeline import_input_data generate_problem_description SimpleRunner get_scoring_pipeline generate_problem get_splitting_pipeline DataFrame PrimitiveMetadata UniformBool 
PrimitiveMetadata Set PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata Enumeration PrimitiveMetadata Uniform PrimitiveMetadata parse_datetime PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata Union PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata List PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata Hyperparameter PrimitiveMetadata PrimitiveMetadata Set Enumeration Uniform UniformBool items list sorted join set_printoptions get_printoptions enumerate append len threshold_ LSTMOutlierDetector print reshape predict fit list asarray get_sub_sequences_length astype flatten append zeros float range enumerate int floor PrimitiveMetadata UniformInt PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata Choice PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata dict PrimitiveMetadata scores get_scoring_pipeline get_splitting_pipeline add_output add_input str uuid4 created add_step add_argument PrimitiveStep append add_hyperparameter Pipeline product list get_hyperparams replace set keys defaults generate_data range append append join read_csv values concatenate load join format dirname abspath query 
assertEqual range convert_metadata sorted isinstance normalize_semantic_types to_json_structure value get_hyperparams DatasetToDataFramePrimitive PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata PrimitiveMetadata | # TODS: Automated Time-series Outlier Detection System <img width="500" src="./docs/img/tods_logo.png" alt="Logo" /> [](https://github.com/datamllab/tods/actions) [](https://codecov.io/gh/datamllab/tods) [Chinese documentation](README.zh-CN.md) TODS is a full-stack automated machine learning system for outlier detection on multivariate time-series data. TODS provides exhaustive modules for building machine learning-based outlier detection systems, including: data processing, time series processing, feature analysis (extraction), detection algorithms, and a reinforcement module. The functionalities provided by these modules include data preprocessing for general purposes, time series data smoothing/transformation, extracting features from the time/frequency domains, various detection algorithms, and involving human expertise to calibrate the system. Three common outlier detection scenarios on time-series data can be performed: point-wise detection (time points as outliers), pattern-wise detection (subsequences as outliers), and system-wise detection (sets of time series as outliers), and a wide range of corresponding algorithms is provided in TODS. This package is developed by [DATA Lab @ Rice University](https://cs.rice.edu/~xh37/index.html). TODS is featured for: * **Full Stack Machine Learning System**, which supports exhaustive components from preprocessing and feature extraction to detection algorithms and a human-in-the-loop interface.
* **Wide Range of Algorithms**, including all of the point-wise detection algorithms supported by [PyOD](https://github.com/yzhao062/pyod), state-of-the-art pattern-wise (collective) detection algorithms such as [DeepLog](https://www.cs.utah.edu/~lifeifei/papers/deeplog.pdf) and [Telemanom](https://arxiv.org/pdf/1802.04431.pdf), and various ensemble algorithms for system-wise detection. * **Automated Machine Learning**, which aims to provide a knowledge-free process that constructs an optimal pipeline for the given data by automatically searching for the best combination of the existing modules. | 1,844
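The point-wise scenario described above — flagging individual time points as outliers — can be illustrated with a minimal sliding-window z-score sketch in plain NumPy. This is only a conceptual illustration, not the TODS API; the window size, threshold, and toy data are arbitrary choices:

```python
import numpy as np

def pointwise_outliers(series, window=20, threshold=3.0):
    """Flag a time point when it deviates from the mean of the
    preceding `window` observations by more than `threshold` stds."""
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            flags[t] = True
    return flags

rng = np.random.default_rng(0)
ts = rng.normal(0.0, 1.0, 200)
ts[150] += 10.0                     # inject a point anomaly
flags = pointwise_outliers(ts)
print("anomaly at t=150 flagged:", bool(flags[150]))  # → True
```

A pattern-wise detector would score whole subsequences instead of single points, and a system-wise detector would score entire series against each other — but the condition-window-then-score structure stays the same.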
datang1992/Correlated-VAEs | ['link prediction'] | ['Correlated Variational Auto-Encoders'] | src/process_Epinions_data.py src/process_tree_data.py src/cvae_ind.py src/cvae_corr.py inference_network2 get_np_edge2 get_ncrr inference_network load_data get_e main train get_icrr generative_network weight_variable get_ncrr inference_network load_data get_e main train get_icrr generative_network weight_variable get_edge_weight my_relu truncated_normal append range load int int64 int32 get_e round softplus matmul relu weight_variable relu reshape concat matmul sigmoid weight_variable matmul relu weight_variable float64 astype int64 range len zeros max range copy randint batch_size tuple vstack gather get_icrr log_prob log run get_ncrr batch_size4 batch_size3 data_dir transpose reduce_sum get_np_edge2 tau batch_size2 append range format inf hstack n_iterations gamma zeros InteractiveSession time minimize print min AdamOptimizer reduce_mean load_data global_variables_initializer train reshape transpose hstack dot sum pop tolist set pinv append zeros range len | # Correlated-Variational-Auto-Encoders Code for my ICML 2019 paper [Correlated Variational Auto-Encoders](https://arxiv.org/abs/1905.05335) ## Files - **cvae_ind.py**: Code for the algorithm CVAE<sub>ind</sub> on general graphs (Section 4.2.3). - **cvae_corr.py**: Code for the algorithm CVAE<sub>corr</sub> on general graphs (Section 4.2.3). - **process_tree_data.py**: Code for constructing the synthetic dataset for the spectral clustering experiment (Section 4.2.2). - **process_Epinions_data.py**: Code for preprocessing the Epinions dataset for the general graph link prediction experiment (Section 4.2.3). To use this code, construct a NumPy npz file that contains two arrays with values from the two datasets (ratings_data and trust_data) on the [Epinions dataset website](http://www.trustlet.org/downloaded_epinions.html) [1] and run this code with the argument *input_data_file_name* set to the path of the npz file.
--- ### References [1] Trust-aware recommender systems. P Massa, P Avesani. Proceedings of the 2007 ACM conference on Recommender systems, 17-24 | 1,845 |
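The npz input file described above can be assembled with `numpy.savez`. A minimal sketch follows: the toy arrays and their column layouts are assumptions (the real layouts should be checked against the downloaded Epinions files, and the expected key names against `process_Epinions_data.py`):

```python
import os
import tempfile

import numpy as np

# Toy stand-ins for the two Epinions files: each ratings row is
# (user, item, rating); each trust row is a directed (truster, trustee) edge.
ratings_data = np.array([[1, 10, 5], [2, 10, 3], [2, 11, 4]])
trust_data = np.array([[1, 2], [2, 1]])

path = os.path.join(tempfile.gettempdir(), "epinions_input.npz")
np.savez(path, ratings_data=ratings_data, trust_data=trust_data)

# Verify the file round-trips with both arrays intact.
loaded = np.load(path)
print(sorted(loaded.files))  # → ['ratings_data', 'trust_data']
```

The script would then be pointed at `path` via its *input_data_file_name* argument.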
datienguyen/cnn_coherence | ['text generation'] | ['A Neural Local Coherence Model'] | utilities/data_stats.py utilities/my_callbacks.py utilities/process_duc2003.py utilities/new_data_helper.py cnn_coherence.py utilities/data_helper.py ranking_loss remove_entity load_POS_EGrid load_all numberize_sentences load_NEG_EGrid load_and_numberize_Egrid_with_Feats get_eTrans get_eTrans_With_Perm load_embeddings get_eTrans_with_Feats load_summary_data adjust_index load_and_numberize_Egrid find_doc_size find_len do_stats sent_stats find_len update_dict get_eTrans_with_Word load_POS_EGrid load_all numberize_sentences init_vocab load_NEG_EGrid load_and_numberize_Egrid_with_Feats get_eTrans_With_Perm load_embeddings get_eTrans_with_Feats load_summary_data adjust_index load_and_numberize_Egrid load_and_numberize_egrids getString maximum find_len split count split len count split str count split append len str count split append len numberize_sentences pad_sequences get_eTrans_with_Feats adjust_index range enumerate len numberize_sentences print pad_sequences get_eTrans_With_Perm adjust_index range enumerate len str sorted print len Counter get_eTrans_with_Feats dict append keys enumerate split numberize_sentences print pad_sequences len get_eTrans_with_Feats uniform adjust_index append range enumerate numberize_sentences print pad_sequences get_eTrans_with_Feats uniform adjust_index append range enumerate len numberize_sentences get_eTrans adjust_index append range len seed uniform append enumerate split append len strip count split str print close sent_stats update_dict append find_len range len items list sorted print append len seed get_eTrans_with_Word numberize_sentences print pad_sequences len uniform adjust_index append range enumerate append len count split | # cnn_coherence Authors: ndat & shafiq. Due to the WSJ license, we only provide entity grid files extracted using the BrownCoherence toolkit.
Citation -------- If you use the entity grid files (including permutations) and the code, please refer to our [ACL 2017](http://aclweb.org/anthology/P17-1121) paper. @inproceedings{tiennguyen2017, author = {Tien Nguyen, Dat and Joty, Shafiq}, title = {A Neural Local Coherence Model}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics | 1,846 |
david-amirault/amhs | ['traffic prediction'] | ['Incrementally Improving Graph WaveNet Performance on Traffic Prediction'] | train.py generate_training_data.py test.py util.py engine.py gen_adj_mx.py regression.py model.py exp_results.py Trainer plot_loss_curve loss_curve make_results_table summary generate_train_val_test generate_graph_seq2seq_io_data get_adjacency_matrix nconv GraphConvNet GWNet main partition main plot_learned_adj_matrix main eval_ make_graph_inputs load_pickle calculate_scaled_laplacian calc_metrics mask_and_fillna calculate_normalized_laplacian load_adj calc_tstep_metrics _to_ser make_pred_df asym_adj DataLoader sym_adj load_dataset StandardScaler get_shared_arg_parser idxmin min mean round read_csv read_csv idxmin plot print loss_curve min axhline iterrows columns concatenate abs transpose min astype range shape tile append expand_dims dayofweek max timedelta64 values arange read_hdf output_dir round fillna savez_compressed transpose traffic_df_filename append range concatenate generate_graph_seq2seq_io_data stride join subdatasets print sort reshape y_start exp inf len flatten zeros std values enumerate int data batch_size calc_tstep_metrics device seq_length from_args partition OLS label_path load_state_dict load_dataset append to range predict make_graph_inputs mean eval checkpoint load print absolute numpy fit make_pred_df plot_learned_adj_matrix to_csv plotheatmap relu DataFrame nodevec2 nodevec1 softmax numpy heatmap savefig mm max model save DataFrame list transpose progress_bar step state_dict eval_ get_iterator shuffle valid_loss enumerate join load_checkpoint Series dict summary train epochs time transpose get_iterator eval append diags flatten coo_matrix sum array flatten coo_matrix diags diags tocoo flatten coo_matrix eye sum array calculate_normalized_laplacian csr_matrix reduce identity shape eigsh load_pickle load join DataLoader transform StandardScaler zeros_like where isnan sqrt float abs isnan zeros_like where clamp squeeze 
transpose get_iterator rename_axis eval append inverse_transform range enumerate dict DataFrame adjdata adjtype aptonly load_adj add_argument ArgumentParser | # amhs This repository includes models, scripts, and utilities related to the automated material handling system (AMHS) of a modern semiconductor Fab. The pipeline in this repository allows the user to simulate traffic in an AMHS, create datasets based on the simulated traffic, and train models to on these created datasets. This repository includes the following: - `archive/`: depricated simulations. Ignore. - `fig/`: schematic of the original Graph WaveNet architecture. - `reference/`: `.txt` documents explaining acronyms, open-ended research questions, and the software requirements for this dataset. The requirements may be installed using Python 3 Pip via `python3 -m pip install <package name>`. - `AMHS_realistic_sim.ipynb`: the Jupyter notebook that simulates traffic in an AMHS. This notebook includes utilities for creating visualizations of the traffic that is simulated in the AMHS. - `AMHS_preprocess.ipynb`: dataset preprocessing utilities. This Jupyter notebook serves as the intermediary between `AMHS_realistic_sim.ipynb` and the Graph WaveNet codebase upon which the models in this repository are built. - `gen_adj_mx.py`: utilities for Graph WaveNet adjacency matrix creation. - `generate_training_data.py`: utilities to create Graph WaveNet datasets for training in PyTorch. | 1,847 |
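The amhs record above builds on Graph WaveNet, whose evaluation utilities (`calc_metrics`, `mask_and_fillna` in the function dump) mask out missing sensor readings before averaging errors. A minimal pure-Python sketch of that masked-MAE idea — the function name and the zero-as-null convention are illustrative, not the repo's exact API:

```python
import math

def masked_mae(preds, labels, null_val=0.0):
    """Mean absolute error over entries whose label is not the null marker."""
    total, count = 0.0, 0
    for p, y in zip(preds, labels):
        if (isinstance(y, float) and math.isnan(y)) or y == null_val:
            continue  # masked-out entry contributes nothing
        total += abs(p - y)
        count += 1
    return total / count if count else 0.0

print(masked_mae([1.0, 2.0, 3.0], [1.5, 0.0, 2.0]))  # 0.75: the 0.0 label is masked
```

Masking before averaging keeps sensors that report no data from dragging the metric toward zero error.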
david-cortes/outliertree | ['outlier detection'] | ['Explainable outlier detection through decision tree conditioning'] | setup.py docs/source/conf.py outliertree/__init__.py set_omp_false build_ext_subclass OutlierTree | # OutlierTree Explainable outlier/anomaly detection based on smart decision tree grouping, similar in spirit to the GritBot software developed by RuleQuest research. Written in C++ with interfaces for R and Python (additional Ruby wrapper can be found [here](https://github.com/ankane/outliertree/)). Supports columns of types numeric, categorical, binary/boolean, and ordinal, and can handle missing values in all of them. Ideal as a sanity checker in exploratory data analysis. # Example outputs Example outliers from the [hypothyroid dataset](http://archive.ics.uci.edu/ml/datasets/thyroid+disease): ``` row [1138] - suspicious column: [age] - suspicious value: [75.00] distribution: 95.122% <= 42.00 - [mean: 31.46] - [sd: 5.28] - [norm. obs: 39] given: [pregnant] = [TRUE] row [2230] - suspicious column: [T3] - suspicious value: [10.60] | 1,848 |
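The OutlierTree record above describes GritBot-style explainable outliers: a value is suspicious only relative to the subgroup picked out by some condition (e.g. age 75 given pregnant = TRUE). The package itself uses decision-tree conditioning; the toy sketch below illustrates only the conditional-flagging idea with a simple z-score inside the conditioned subgroup — every name here is illustrative, not the package's API:

```python
def flag_conditional_outliers(rows, condition, value_key, z_thresh=3.0):
    """Within the subgroup where `condition` holds, flag rows whose
    `value_key` lies more than z_thresh sample std-devs from the group mean."""
    group = [r for r in rows if condition(r)]
    vals = [r[value_key] for r in group]
    n = len(vals)
    if n < 2:
        return []
    mean = sum(vals) / n
    std = (sum((v - mean) ** 2 for v in vals) / (n - 1)) ** 0.5
    if std == 0:
        return []
    return [r for r in group if abs(r[value_key] - mean) / std > z_thresh]

rows = [{"pregnant": True, "age": a} for a in (30, 31, 32, 29, 75)]
rows.append({"pregnant": False, "age": 60})
print(flag_conditional_outliers(rows, lambda r: r["pregnant"], "age", z_thresh=1.5))
```

A global z-score would miss this case; conditioning on the subgroup is what makes the flagged value explainable.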
david-siqi-liu/yelp-sentiment-analysis | ['sentiment analysis', 'word embeddings'] | ['Sentiment Analysis of Yelp Reviews: A Comparison of Techniques and Models'] | yelpsent/metrics/__init__.py yelpsent/visualization/__init__.py test_environment.py yelpsent/features/__init__.py yelpsent/models/predict_model.py yelpsent/models/train_model.py yelpsent/__init__.py yelpsent/data/__init__.py setup.py yelpsent/metrics/classification.py yelpsent/models/__init__.py yelpsent/features/vectorizer.py yelpsent/data/load_dataset.py yelpsent/visualization/visualize.py main sample_dataset load_dataset YelpSentCountVectorizer f1_score nb_predict_proba get_examples most_frequent_words evaluate_pipeline train_and_test confusion_heat_map print major Path groupby reset_index min float len recall_score precision_score zip len get_feature_names format exp print transpose len transform range prod enumerate append class_log_prior_ print transpose get_feature_names zip f1_score predict confusion_heat_map evaluate_pipeline Pipeline fit show xlabel confusion_matrix ylabel figure heatmap | Yelp Reviews Sentiment Analysis ============================== https://arxiv.org/abs/2004.13851 - 350,000 Yelp reviews on 5,000 restaurants - Ablation study on text preprocessing techniques - For machine learning models, we find that using binary bag-of-word representation, adding bi-grams, imposing minimum frequency constraints and normalizing texts have positive effects on model performance - For deep learning models, we find that using pre-trained word embeddings and capping maximum length often boost model performance - Using macro F1 score as our comparison metric, we find simpler models such as Logistic Regression and Support Vector Machine to be more effective at predicting sentiments than more complex models such as Gradient Boosting, LSTM and BERT | 1,849 |
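The yelp-sentiment record above uses macro F1 as its comparison metric (its `yelpsent.metrics` module computes per-class precision/recall). A self-contained sketch of macro-averaged F1 — the unweighted mean of per-class F1 scores — independent of the repo's own implementation:

```python
def macro_f1(y_true, y_pred):
    """Macro F1: compute F1 per class, then average with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

print(macro_f1([1, 1, 0, 0], [1, 0, 0, 0]))
```

Equal class weighting is why macro F1 is preferred over accuracy when sentiment classes are imbalanced.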
davidar/bib | ['time series'] | ['Structure Discovery in Nonparametric Regression through Compositional Kernel Search'] | util.py unicode_to_latex.py write_bib resolve_bib resolve_url dl_pdf random_sleep parse_bib parse_xml_flat merge_dict str_norm BibWriter bib_search u2latex isurl streq_norm FieldDict OrderedDict Parser BibWriter write_stream Popen stdout Popen Popen stdout Popen items list parse getroot urlparse strip uniform sleep | # bib ## Quick Start Create a `seed.bib` like the following: ``` @article{ teh.jordan:hdp, doi = {10.1198/016214506000000302}, oai = {eprints.ucl.ac.uk.OAI2:13457}, pdf = {http://www.stat.berkeley.edu/~jordan/653.pdf}, keywords = {bnp} | 1,850 |
davidbau/ganseeing | ['semantic segmentation'] | ['Seeing What a GAN Cannot Generate'] | seeing/train_hybrid_bottom.py seeing/upsegmodel/prroi_pool/test_prroi_pooling2d.py seeing/sampler.py seeing/optimize_residuals.py seeing/pbar.py seeing/tally.py seeing/upsample.py seeing/upsegmodel/prroi_pool/build.py seeing/setting.py seeing/proggan_ablation.py seeing/parallelfolder.py seeing/zdataset.py notebooks/ipynb_drop_output.py seeing/proggan.py seeing/invert.py seeing/frechet_distance.py seeing/pidfile.py seeing/optimize_z_lbfgs.py seeing/segviz.py seeing/train_multilayer_inv.py seeing/encoder_loss.py seeing/upsegmodel/__init__.py seeing/encoder_net.py seeing/runningstats.py seeing/fsd.py seeing/autoeval.py seeing/upsegmodel/prroi_pool/prroi_pool.py seeing/show.py seeing/samplegan.py seeing/upsegmodel/resnext.py seeing/LBFGS.py seeing/upsegmodel/prroi_pool/functional.py seeing/upsegmodel/prroi_pool/__init__.py seeing/customnet.py seeing/renormalize.py seeing/upsegmodel/models.py seeing/nethook.py seeing/workerpool.py seeing/upsegmodel/resnet.py seeing/train_onelayer_inv.py seeing/segmenter.py seeing/train_hybrid_inv.py seeing/imgviz.py seeing/make_z_dataset.py strip_output_from_cell autoimport_eval CustomAlexNet Vectorize CustomResNet GlobalAveragePool2d cor_distance cor_square_error add_adjustment make_over5_resnet HybridLayerNormEncoder FixedGANPriorGenerator Layer1toZNormEncoder SkipAdjustedSequence PixelNormLayer ResidualGenerator BaselineTunedDirectGenerator LayerNormEncoder calculate_activation_statistics calculate_frechet_distance sample_frechet_distance cached_tally_directory tally_generated_objects diff_figure main tally_dataset_objects tally_directory strip_image_from_grid_row ImageVisualizer border_from_mask infinite_sampler refine_z_lbfgs train_inv_layer layers_after train_inverse last_gen_layername train_inv_joint epoch_grouper split_gen_layers MeanStats test_sampler cor_square_error LBFGS FullBatchLBFGS is_legal polyinterp refine_z_lbfgs 
target_filename_from_source SaveNpyWorker make_matching_tensor set_requires_grad apply_ablation_replacement InstrumentedModel subsequence set_requires_grad delete_log save_checkpoint edit save_tensor_image main log_progress visualize_results set_requires_grad delete_log save_checkpoint save_tensor_image main log_progress visualize_results is_image_file grayscale_loader ndarray make_parallel_dataset ParallelImageFolders default_loader is_npy_file walk_image_files print VerboseContextManager reporthook post tqdm_terminal innermost_tqdm in_notebook descnext CallableModule __call__ desc mark_job_done exit_if_job_done delete_pidfile pidfile_taken from_old_pt_dict DoubleResolutionLayer from_tf_parameters NormConvBlock sizes_from_state_dict from_state_dict ProgressiveGenerator PixelNormLayer from_pth_file NormUpscaleConvBlock print_network state_dict_from_old_pt_dict OutputConvBlock WScaleLayer state_dict_from_tf_parameters print_network NormConvBlock G128_pixelwisenorm Generator128 Generator16 split_model PixelNormLayer G128_simple Generator32 Generator64 Generator1024 WScaleLayer G128_equallr G128_minibatch_disc NormSimpleBlock Generator8 NormUpscaleSimpleBlock Generator256 NormUpscaleConvBlock as_image renormalizer find_normalizer Renormalizer as_url from_url as_tensor from_image RunningTopK RunningCovariance RunningConditionalTopK RunningQuantile RunningVariance RunningCrossCovariance progress_addbmm RunningConditionalVariance sample_portion RunningBincount RunningConditionalQuantile GatherTensor save_znum_images main copy_lightbox_to SaveImageWorker coordinate_sample FixedSubsetSampler test FixedRandomSubsetSampler main UnifiedParsingSegmenter load_unified_parsing_segmentation_model ensure_upp_segmenter_downloaded BaseSegmenter test_main segment_key swatch_image seg_as_image segment_visualization load_proggan load_proggan_inversion load_test_image load_dataset load_proggan_ablation pil_to_b64 pil_to_html blocks_tags blocks rows_tags show html rows reset pil_to_url 
CallableModule a flush batch_bincount gather_topk load_cached_state intersection_from_conditional_quantile tally_conditional_topk information_quality_ratio intersection_over_union conditional_samples tally_bincount mi_from_conditional_quantile iou_from_conditional_quantile call_compute tally_mean tally_topk tally_conditional_mean tally_covariance tally_conditional_quantile tally_cat tally_quantile iou_from_conditional_indicator_mean make_loader tally_cross_covariance iqr_from_conditional_quantile mutual_information joint_entropy save_cached_state encoder_loss IdentityLayer testing_loader generate_and_recover_features monitor_losses epoch_grouper set_requires_grad training_loader save_checkpoint save_tensor_image main visualize_results encoder_loss IdentityLayer old_encoder_loss testing_loader generate_and_recover_features monitor_losses epoch_grouper set_requires_grad training_loader save_checkpoint generate_recovered save_tensor_image main visualize_results old_monitor_losses encoder_decoder_loss testing_loader set_requires_grad epoch_grouper training_loader save_checkpoint save_tensor_image main visualize_results encoder_loss IdentityLayer testing_loader set_requires_grad monitor_losses epoch_grouper training_loader save_checkpoint save_tensor_image main visualize_results sequence_data_size upsample_grid convconfig_data_size sequence_scale_offset convconfig_scale_offset convconfigs find_sizer upsampler image_size_from_source WorkerBase early_terminate_pools WorkerPool z_sample_for_model standard_z_sample z_dataset_for_model SegmentationModule Resnet SegmentationModuleBase conv3x3 conv3x3_bn_relu ModelBuilder UPerNet ResNet resnet50 Bottleneck load_url conv3x3 resnet101 BasicBlock GroupBottleneck ResNeXt load_url conv3x3 resnext101 PrRoIPool2DFunction PrRoIPool2D TestPrRoIPool2D CustomResNet getattr atleast_2d print iscomplexobj atleast_1d dot sqrtm trace eye real abs max imag mean cov print add_argument exit stderr savefig ArgumentParser sample_frechet_distance 
histout parse_args print_usage join replace save isfile tally_directory makedirs get_label_and_category_names UnifiedParsingSegmenter DataLoader ParallelImageFolders zeros cuda get_label_and_category_names UnifiedParsingSegmenter DataLoader zeros cuda z_dataset_for_model get_label_and_category_names UnifiedParsingSegmenter DataLoader zeros cuda subplots grid UnifiedParsingSegmenter Figure set_yscale get_label_and_category_names bar legend append plot set_xticklabels set_xlim tight_layout item float FigureCanvas enumerate text set_yticks argsort set_xticks set_ylabel set_ylim len full enumerate zeros_like set_requires_grad clone FullBatchLBFGS items list subsequence split_gen_layers index update join infinite_sampler train_inverse set_requires_grad split_gen_layers test_sampler getattr device infinite_sampler join test_sampler device load register_modules Adam MultiStepLR parameters CheckpointIO next islice iter arange polyval max roots solve real append sum range Inf plot astype sqrt minimum min isreal maximum figure zeros array len split F make_matching_tensor get view tuple from_numpy shape to array len OrderedDict items list parameters Module isinstance model HybridLayerNormEncoder zero_grad l1_loss delete_log MultiStepLR save_checkpoint features pbar cuda vgg16 image_source values Adam OrderedDict ResidualGenerator load_state_dict F range load_test_image eval VF H join items load_proggan backward set_requires_grad makedirs InstrumentedModel named_parameters dict step mse_loss visualize_results subsequence join remove print join savez save makedirs str join isinstance copy dirname save_tensor_image makedirs numpy save Parameter isintance clone set_grad_enabled FullBatchLBFGS snapshot_every clone E endswith join sorted dirname isfile append pbar walk items sorted list tuple set OrderedDict dict append walk_image_files enumerate innermost_tqdm set_postfix str set_description innermost_tqdm join write python_print __name__ dict __call__ update dict join remove 
print pidfile_taken exit isfile fdopen O_CREAT O_EXCL makedirs write dirname fsync O_RDWR register flush open unlink close print parameters format ProgressiveGenerator state_dict_from_old_pt_dict sizes_from_state_dict load_state_dict ProgressiveGenerator sizes_from_state_dict state_dict_from_tf_parameters load_state_dict ProgressiveGenerator state_dict_from_old_pt_dict sizes_from_state_dict load_state_dict append count torch_from_tf dict permute append flip count append count str list Sequential output add_module features enumerate renormalizer renormalizer as_image BytesIO save decode renormalizer convert to_tensor resize b64decode BytesIO sub open find_normalizer isinstance getattr reversed isinstance list pbar addbmm_ range zeros bernoulli children arange tuple in_features outdir DataParallel view pthfile shape autoimport_eval quiet next save_znum_images size in_channels standard_z_sample load copy_lightbox_to isinstance join makedirs join getcwd copy realpath dirname RandomState ravel_multi_index astype stack uniform unravel_index zeros enumerate len exists list basename add copy set FixedRandomSubsetSampler indir ParallelImageFolders number_filename dereference coordinate_sample list class_subset FixedRandomSubsetSampler assert_almost_equal range SegmentationModule build_encoder ModelBuilder eval sum build_decoder join urlretrieve print isfile makedirs predict_single_class view print segment_batch get_label_and_category_names UnifiedParsingSegmenter item bincount cuda open cpu flip append get_label_and_category_names zoom reshape flatten argsort bincount zeros load_state_dict_from_url from_state_dict load_state_dict_from_url model_classname load_state_dict load_state_dict_from_url eval model_classname load_state_dict join download_and_extract_archive load_dataset items list blocks_tags isinstance extend append pil_to_html str hasattr isinstance Image _repr_html_ extend escape append enumerate BytesIO save append flush blocks display blocks display flush 
bincount contiguous bincount view len size keys mean zeros max quantiles sorted normalize view size statistic conditional shape prog depth float keys zeros sum log range range log joint_entropy mutual_information isinstance list FixedSubsetSampler isinstance print TensorDataset Tensor range len load items print items savez makedirs dirname state_dict encoder_loss testing_loader Sequential training_loader post getattr append lr manual_seed enumerate retain_layers epoch_grouper parameters resnet bottom savez generate_and_recover_features min len OrderedDict sum range generator encoder retained_features generate_and_recover_features mse_loss cor_square_error DataLoader z_dataset_for_model DataLoader z_dataset_for_model recover generate_recovered generator encoder mse_loss cor_square_error generator encoder generator encoder_decoder_loss make_over5_resnet decoder encoder encoder mse_loss decoder Layer1toZNormEncoder LayerNormEncoder IdentityLayer r_maker r_decoder s_maker encoder r_maker s_maker encoder r_maker r_decoder s_maker sequence_data_size image_size_from_source upsample_grid sequence_scale_offset append tuple list zip tuple expand size resolution hasattr find_sizer getattr reversed isinstance early_terminate list items hasattr isinstance view in_features standard_z_sample to float RandomState load_url ResNet load_state_dict load_url ResNet load_state_dict join format urlretrieve write makedirs load_url load_state_dict ResNeXt | Seeing What a GAN Cannot Generate ================================= State-of-the art GANs can create increasingly realistic images, yet they are not perfect. What is a GAN *unable* to generate? This repository contains the code for the ICCV 2019 paper [Seeing What a GAN Cannot Generate]( http://ganseeing.csail.mit.edu/papers/seeing.pdf), which introduces a framework that can be used to answer this question.  |  | 1,851 |
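The ganseeing record above measures generator coverage with a Fréchet distance between Gaussian fits of feature statistics (`calculate_frechet_distance`, `calculate_activation_statistics` in the function dump). The repo's version uses full covariance matrices, FD² = ||μ₁−μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}); the sketch below shows only the one-dimensional special case, where the formula collapses to a closed form:

```python
def frechet_distance_1d(mu1, sigma1, mu2, sigma2):
    """Squared Frechet distance between two 1-D Gaussians:
    (mu1 - mu2)^2 + sigma1^2 + sigma2^2 - 2*sigma1*sigma2."""
    return (mu1 - mu2) ** 2 + sigma1 ** 2 + sigma2 ** 2 - 2 * sigma1 * sigma2

print(frechet_distance_1d(0.0, 1.0, 3.0, 1.0))  # 9.0: identical spread, mean gap of 3
```

The matrix case needs a matrix square root (e.g. `scipy.linalg.sqrtm`), which is where the repo's numerical care about complex residue comes in.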
davide-belli/generative-graph-transformer | ['graph generation', 'semantic segmentation'] | ['Image-Conditioned Graph Generation for Road Network Extraction'] | metrics/streetmover_distance.py models/models_encoder.py utils/configs.py utils/utils.py utils/dataset.py models/models_decoder.py pretrain_encoder.py metrics/statistics.py models/layers_GGT.py main.py arguments.py get_parser set_default_args load_decoder load_encoder test get_epoch_fn epoch_test epoch_test_MLP epoch_train_MLP train epoch_test_GraphRNN epoch_train main run_epoch get_accuracy_A get_BCE_adj get_degree_hist compute_statistics get_diameters get_clustering_hist get_MSE_coord compute_statistics_MLP get_cumulative_distance show_assignments get_points StreetMoverDistance euclidean_distance SinkhornDistance MultiHeadAttention FeedForward generate_mask_pad get_clones generate_mask_sequence attention PositionalEncoder DecoderLayer DecoderGRUAtt DecoderGraphRNN init_weights DecoderMLP DecoderGGT DecoderGraphRNNAtt DecoderGRU CNNEncoderSimple CNNEncoderAtt CNNEncoderSimpleForContextAttention CNNDecoderSimple Configs custom_collate_fn_with_coordinates ToulouseRoadNetworkDataset denormalize load_images load_dataset normalize load_raw_images custom_collate_fn save_cnn_plots save_statistics plot_point_cloud print_and_log generate_input_sequence clear_log visualize_attention_sequence plot_output_graph visualize_attention_image sample_sigmoid full_frame_high_res save_losses decode_adj ensure_paths full_frame ensure_dir generate_mask_sequence plot_histogram_streetmover update_writer add_argument ArgumentParser batch_size weight_decay losses_path device is_test file_logs max clear_log lr_rate seed checkpoints_base encoder notes file_name format lamb ensure_paths plots_path logs_path ensure_dir tensorboard_path decoder min load checkpoints_path list CNNEncoderAtt Adam device parameters eval load_state_dict all_history CNNEncoderSimple is_test to load checkpoints_path list to DecoderGRUAtt DecoderGraphRNN Adam 
parameters eval DecoderMLP load_state_dict device DecoderGGT is_test DecoderGraphRNNAtt DecoderGRU is_test reset_hidden zero_grad unsqueeze device max_n_nodes generate_input_sequence plot_output_graph list all_history append encoder to sum range lamb pack_padded_sequence eval plots_path item generate_mask_sequence enumerate decoder backward train step len zero_grad numpy device plot_output_graph list append encoder to sum range ones_like lamb decode_adj eval plots_path item enumerate decoder backward train step len plot_histogram_streetmover dump std absolute mean max_n_nodes generate_mask_sequence array open plot_histogram_streetmover dump std absolute mean max_n_nodes generate_mask_sequence array open plot_histogram_streetmover dump std absolute mean array open load_decoder DataLoader print_and_log save file_logs seed checkpoints_path ToulouseRoadNetworkDataset MSELoss file_tensorboard run_epoch range state_dict SummaryWriter format save_losses manual_seed BCELoss update_writer time load_encoder print get_epoch_fn epochs len seed save_statistics format load_decoder load_encoder print ToulouseRoadNetworkDataset DataLoader get_epoch_fn manual_seed run_epoch len save_cnn_plots decoder criterion backward print to zero_grad mean eval append encoder train step enumerate detach batch_size dataset_path ImageFolder DataLoader save device lr_rate seed list Adam MSELoss to run_epoch range state_dict format Compose manual_seed item context_attention ensure_dir max_time time print parameters epochs len sum int wasserstein_distance get_accuracy_A get_BCE_adj streetmover_distance get_degree_hist decode_adj get_diameters get_MSE_coord item from_numpy_matrix numpy int wasserstein_distance get_accuracy_A streetmover_distance get_degree_hist get_diameters from_numpy_matrix sum list histogram values degree_histogram append diameter connected_component_subgraphs list to max view criterion_bce FloatTensor zeros float sum max to max criterion_mse view range append sqrt item 
euclidean_distance arrow axis tight_layout title scatter savefig clf legend max range dropout print transpose matmul sqrt unsqueeze masked_fill softmax ones triu zeros range xavier_uniform_ weight fill_ list LongTensor FloatTensor shuffle append keys cat len append FloatTensor int format convert append to_tensor enumerate pad_sequence list sort zip pad_sequence list sort zip repeat cat tril float to device T zeros max range tril checkpoints_path statistics_path dataset_path losses_path plots_path logs_path ensure_dir tensorboard_path makedirs add_scalar print file_name list statistics_path join file_name join losses_path axes set_visible figure autoscale axes set_visible figure autoscale save_image format make_grid list plot zip min close set add ylim clf savefig full_frame xlim range show list plot min close full_frame_high_res ylim clf zip xlim range len text set_yticks set_xlim grid axvline set_xlabel set mean set_xticks set_ylabel linspace despine legend savefig median std set_ylim distplot show subplot pyramid_expand cpu text axis colorbar imshow resize ceil numpy clip enumerate len subplots text len tight_layout imshow savefig figure xticks round range yticks | # Generative Graph Transformer [](https://opensource.org/licenses/MIT) PyTorch implementation of <i>Image-Conditioned Graph Generation for Road Network Extraction</i> (https://arxiv.org/abs/1910.14388) ## Overview This library contains a PyTorch implementation of the Generative Graph Transformer (GGT): an autoregressive, attention-based model for image-to-graph generation as presented in [[1]](#citation)(https://arxiv.org/abs/1910.14388), in addition to other baselines discussed in the paper. Find out more about this project in our [blog post](https://davide-belli.github.io/generative-graph-transformer). ## Dependencies See [`requirements.txt`](https://github.com/davide-belli/generative-graph-transformer/tree/master/requirements.txt) * **scipy==1.2.1** * **scikit_image==0.15.0** | 1,852 |
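The generative-graph-transformer record above scores generated graphs with optimal-transport metrics: a Sinkhorn-based StreetMover distance on point clouds, plus `scipy.stats.wasserstein_distance` on degree and diameter histograms. In one dimension with equal-size samples, the W1 distance has a closed form — sort both samples and average the gaps between order statistics — sketched here without SciPy:

```python
def wasserstein_1d(xs, ys):
    """W1 between equal-size 1-D samples: mean gap of sorted order statistics."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(wasserstein_1d([0, 1, 2], [1, 2, 3]))  # 1.0: every point shifts by one
```

Higher-dimensional point clouds (the StreetMover case) have no such closed form, which is why the repo resorts to entropic-regularized Sinkhorn iterations instead.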
davide-belli/toulouse-road-network-dataset | ['graph generation', 'semantic segmentation'] | ['Image-Conditioned Graph Generation for Road Network Extraction'] | experiments/experiment_maxprevnode_distribution.py utils/utils.py utils/generate_image_arrays.py experiments/experiment_graphsize_distribution.py dataset.py utils/generate_bins.py config.py utils/generate_datapoints.py generate_toulouse_road_network_dataset.py custom_collate_fn_with_coordinates ToulouseRoadNetworkDataset denormalize load_images load_dataset normalize load_raw_images custom_collate_fn generate_plots generate_plots plot_bins_histogram main_generate_bins find_bin get_bins_lengths get_bins extract_bins clockwise_angle_between to_x_coordinate to_nodes_edges order_by_angle main_generate_datapoints edges_to_adjacency_lists merge_duplicate_nodes join_intersection is_mergeable normalize_coordinates is_point_in_square handle_crossings get_possible_lines compute_slope detect_intersection generate_bfs augment to_x_idx check_split generate_dfs find_border_intersection get_valid_line to_y_coordinate get_valid_lines to_y_idx is_line_in_square merge_straight_lines extract_split main_generate_image_arrays CustomImageFolder Square generate_datapoint save_image_bfs Bin ensure_dir euclidean_distance_points get_nodes_from_lines coefficients Point save_image_by_lines save_image_by_nodes_edges line_intersection save_dataset euclidean_distance Line full_frame save_image list LongTensor FloatTensor shuffle append keys cat len append FloatTensor int format convert append to_tensor enumerate pad_sequence list sort zip pad_sequence list sort zip DataLoader clf kdeplot set_title ToulouseRoadNetworkDataset set_xlabel title log10 savefig legend append set_xlim item int print jointplot set_ylabel hist array set_ylim max percentile axvline Counter range distplot numpy len int range add defaultdict get_bins append shapeRecords range enumerate len items list print append len title hist savefig figure time dump 
plot_bins_histogram print get_bins_lengths Reader extract_bins open append Line union set append Line line_intersection find_border_intersection is_line_in_square b a is_point_in_square get_valid_line append line_intersection append Line euclidean_distance pop join_intersection extend detect_intersection append enumerate list tuple index set append union enumerate list sorted len reversed add dict set append keys range enumerate rad2deg arctan append compute_slope remove list is_mergeable reversed index append range len enumerate append append defaultdict arctan2 append sort clockwise_angle_between pop sorted list reversed add set order_by_angle append enumerate pop sorted list reversed add set order_by_angle append enumerate generate_datapoint arange Bin to_nodes_edges Point save_image round max values open edges_to_adjacency_lists Square merge_duplicate_nodes normalize_coordinates handle_crossings format get_possible_lines generate_bfs to_x_idx save_dataset load check_split time deepcopy generate_dfs print get_valid_lines to_y_idx merge_straight_lines x dict numpy print enumerate dump format print Compose DataLoader open ensure_dir extract_split CustomImageFolder enumerate len a y x b coefficients y x append list array set list map dict zip len makedirs axes set_visible figure autoscale plot right rand up get_nodes_from_lines down ylim clf savefig left xlim enumerate len list plot right up set add down ylim clf savefig zip left xlim enumerate Square plot right up close down ylim clf savefig full_frame left xlim enumerate list plot right up set add down ylim clf savefig zip left xlim enumerate dump len range open | # Toulouse Road Network dataset [](https://opensource.org/licenses/MIT) Python code to generate the Toulouse Road Network dataset as introduced in <i>Image-Conditioned Graph Generation for Road Network Extraction</i> (https://arxiv.org/abs/1910.14388) ## Overview This library contains a PyTorch Dataset Class to use the Toulouse Road Network datasetas 
presented in [[1]](#citation)(https://arxiv.org/abs/1910.14388), in addition to all the code developed to extract, preprocess, filter, augment and store the dataset. Find out more about this project in our [blog post](https://davide-belli.github.io/toulouse-road-network). ## Dependencies See [`requirements.txt`](https://github.com/davide-belli/toulouse-road-network-dataset/requirements.txt) * **matplotlib==2.2.2** * **torch==1.1.0** | 1,853 |
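The Toulouse road-network record above splits road segments wherever they cross (`line_intersection`, `detect_intersection`, `handle_crossings` in the function dump). A standard parametric segment-intersection test in that spirit — pure geometry, not the repo's exact code:

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection point of closed segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel or collinear: no single crossing point
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```

Requiring both parameters t and u to lie in [0, 1] is what distinguishes a true segment crossing from an intersection of the infinite supporting lines.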
davide97l/Sentiment-analysis | ['sentiment analysis'] | ['Convolutional Neural Networks with Recurrent Neural Filters'] | train.py bert.py loader.py vocabulary.py main.py model.py right_pad evaluate_one_epoch train_one_epoch SSTDataset load_embedding_matrix get_loaders load_data load_embedding preprocess_data CNN RNN temporal_value DCNN RNF time_distributed train batch2RNNinput evaluate Voc len argmax backward model zero_grad DataLoader train step eval DataLoader join punctuation words set maketrans append split Size len sample zeros enumerate to TensorDataset DataLoader squeeze T flip eval model backward step zero_grad dataset to batch2RNNinput cross_entropy len | # Sentiment-analysis Opinion mining (sometimes known as sentiment analysis or emotion AI) refers to the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to the voice of the customer materials such as reviews and survey responses, online and social media, and healthcare materials for applications that range from marketing to customer service to clinical medicine. Industrial circles utilize opinion mining techniques to detect people’s preference for further recommendation, such as movie reviews and restaurant reviews. In this assignment, we need to establish a sentiment classification model for the given sentence. In this project I have implemented a CNN, DCNN, RNN, RNF and BERT model for sentence sentiment classification. ## Dataset preparation Each dataset should be formed by 3 files: `train.txt`, `dev.txt`, `test.txt`, each having the following structure. Make sure the 3 files are placed the same folder. ``` l1 sentence1 l2 sentence2 ... lN sentenceN ``` | 1,854 |
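The Sentiment-analysis record above expects each split file to contain one `label sentence` pair per line (`l1 sentence1` … `lN sentenceN` in its README). A tiny loader sketch for that format — assuming integer sentiment labels, which the README does not state explicitly:

```python
def load_split(lines):
    """Parse 'label<space>sentence' lines into (label, sentence) pairs."""
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines
        label, _, sentence = line.partition(" ")
        pairs.append((int(label), sentence))
    return pairs

print(load_split(["1 great movie", "0 boring plot"]))
```

`str.partition` splits only on the first space, so multi-word sentences survive intact.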
davidinouye/destructive-deep-learning | ['density estimation'] | ['Deep Density Destructors'] | ddl/externals/mlpack/setup.py ddl/autoregressive.py ddl/tests/test_destructors.py ddl/validation.py ddl/local.py ddl/independent.py ddl/utils.py docs/conf.py ddl/tests/test_mixture.py ddl/datasets.py ddl/tests/__init__.py ddl/__init__.py ddl/mixture.py ddl/externals/mlpack/__init__.py ddl/univariate.py ddl/tests/test_densities.py ddl/base.py ddl/externals/mlpack/tests/test_mlpack.py scripts/icml_2018_experiment.py scripts/maf_data.py ddl/linear.py scripts/test_toy_experiment.py ddl/tests/test_datasets.py ddl/externals/mlpack/_mlpack_estimators.py ddl/externals/__init__.py ddl/externals/mlpack/tests/__init__.py scripts/add_default_docstrings.py setup.py ddl/gaussian.py ddl/deep.py docs/create_api.py ddl/tree.py configuration AutoregressiveDestructor BaseDensityDestructor IdentityDestructor BoundaryWarning _NumDimWarning get_n_features DestructorMixin create_implicit_density _InverseTransformer ScoreMixin get_implicit_density CompositeDestructor create_inverse_canonical_destructor _ImplicitDensity get_inverse_canonical_destructor create_inverse_transformer _InverseCanonicalDestructor UniformDensity _check_global_random_state _make_gaussian_grid _make_quadratic _make_manifold_gaussian_grid make_toy_data _make_rotated_uniform _make_sin_wave _make_autoregressive _make_concentric_circles _make_grid _get_y_and_shuffle _make_rbig_sin_wave _make_uniform_grid _Data DeepDestructorCV DeepDestructor _take _consume _compute_precision_cholesky _compute_covariance_cholesky _compute_log_det_cholesky _cholesky_to_full GaussianDensity _estimate_log_gaussian_prob IndependentDensity IndependentInverseCdf IndependentDestructor _SimpleMatrix _IdentityWithScaling BestLinearReconstructionDestructor LinearProjector IdentityLinearEstimator RandomOrthogonalEstimator _HouseholderWithScaling _inverse_transform RandomFeaturePairs ImageFeaturePairs _fit_transform FeatureGroupsDestructor _transform 
_score_samples GaussianMixtureDensity _GaussianMixtureMixin _create_fitted_mixture _MixtureMixin FirstFixedGaussianMixtureDensity _BayesianGaussianMixtureDensity _MixtureDensity _get_component_array _from_unit _get_inverse_tree _relative_to_absolute_probability TreeDestructor _ArrayedTreeWrapper _absolute_to_relative_probability RandomTreeEstimator _SklearnNode _tree_transform _check_get_tree TreeDensity _to_unit _get_arrayed_tree _check_univariate_X HistogramUnivariateDensity ScipyUnivariateDensity check_X_in_interval_decorator check_X_in_interval get_support_or_default _check_floating check_domain make_finite make_positive make_interior get_domain_or_default has_method make_interior_probability _assert_X_in_interval _assert_warns _relative_diff _InvertibilityError _DestructorError _check_identity_element _sample_demo _check_canonical_domain _check_support _assert_unit_domain _ShouldOnlyBeInTestWarning check_density _UniformabilityError _checking _emd_sample _check_estimator_but_ignore_warnings _check_uniformability _clean_warning_registry _compute_emd _CanonicalDomainError _IdentityElementError _assert_no_warnings check_destructor _check_invertibility _check_score_method _success _check_destructor_interface _ignore_boundary_warnings _get_support_n_features build_mlpack configuration MlpackDensityTreeEstimator test_mlpack_det test_mlpack_density_tree_destructor test_mlpack_det_get_arrayed_tree test_make_toy_data test_histogram_multivariate_density test_gaussian_density test_histogram_univariate_density test_autoregressive_mixture_density test_tree_destructor test_inverse_canonical_destructor test_tree_destructor_with_node_destructor test_pca_destructor test_autoregressive_gaussian_destructor test_normal_independent_destructor test_independent_destructor test_deep_destructor_cv_with_tree_destructor test_histogram_univariate_destructor test_random_linear_householder_destructor test_histogram_multivariate_destructor test_identity_destructor 
test_deep_destructor_with_tree_destructor test_autoregressive_sklearn_mixture_destructor copy_and_overwrite setup _show_arr _show_functions create_api_rst _show_classes _show_header _show_module _get_pair_estimators _get_inverse_logit_destructor run_experiment _get_copula_destructor _setup_loggers _get_model _get_pair_canonical_destructor _get_experiment_filename_and_label load_experiment_results _preprocess_cifar10 get_maf_data _make_dir _data_dict_to_arr _one_hot_encode _preprocess_mnist _get_maf_mnist _check_maf_data _get_maf_original _save_mnist_recreation_indices _get_cifar10_data_and_labels _data_arr_to_dict _get_maf_cifar10 _logit_transform _get_mnist_raw _flip_augmentation test_toy_destructor _get_toy_destructors_and_names remove set_options add_subpackage Configuration exists n_features_ hasattr nan warn DeprecationWarning warn DeprecationWarning warn maker dot rand check_random_state array T check_random_state randn rand func sin abs int list T check_random_state permutation ones warn dot multinomial vstack linspace eye _get_y_and_shuffle meshgrid round array range list check_random_state multinomial vstack _get_y_and_shuffle sum array range shuffle concatenate deque islice next dot T T less_equal sqrt shape any cholesky T square outer ravel _compute_log_det_cholesky shape dot empty row_norms zip sum enumerate diag sum log shape fit_transform cls ones check_array inv shape sqrt NaN cholesky array value is_leaf nan next value is_leaf nan next check_X_in_interval check_X_in_interval pop _update_stack _from_unit threshold minimum ones is_leaf check_array threshold_out maximum shape domain transform _to_unit array deepcopy destructor value threshold is_leaf create_inverse_canonical_destructor zip check_X_in_interval DataConversionWarning reshape check_array warn array has_method warn str has_method warn str list DataConversionWarning islice warn isnan cycle any array minimum str list check_domain DataConversionWarning tolist min maximum warn copy shape zip 
max range warn _check_floating _check_floating _check_floating eps _check_floating eps abs array str get_support_or_default _check_estimator_but_ignore_warnings check_random_state _sample_demo tolist clone _check_score_method _success any info sample has_method _get_support_n_features score_samples fit _check_property _check_invertibility _check_uniformability _check_canonical_domain _success _check_destructor_interface info _check_identity_element density_ warn fit_from_density inverse_transform _check_density_attr abs max str _sample_demo exp tolist _checking check_density _check_estimator_but_ignore_warnings copy mean info sample has_method check_random_state transform _get_domain_n_features clone _check_score_method _success get_domain_or_default any assert_array_equal score_samples fit density_ warn fit_from_density str _sample_demo hasattr transpose tolist _emd_sample _plot_data_for_debug info sample has_method float check_random_state _get_domain_n_features clone maximum _avg_p_val _success get_domain_or_default zeros fit _check_nearly_equal _sample_demo check_random_state transform rand clone _success info get_domain_or_default inverse_transform _get_support_n_features fit check_random_state _assert_unit_domain score _success info transform has_method array score_samples fit check_random_state transform rand _success _relative_diff info get_domain_or_default _get_support_n_features fit check_random_state check_domain transpose isnan any check_random_state _compute_emd sample zeros append shape any dist sum score DeprecationWarning maximum copy warn mean assert_array_equal has_method abs array _check_support clear items list hasattr resetwarnings _clean_warning_registry _clean_warning_registry append _assert_no_warnings rand _assert_warns message isinstance warn join remove urlretrieve print getcwd extractall chdir close call realpath dirname mkdir open cythonize build_mlpack add_extension ext_modules append TreeDestructor check_random_state randn print ones 
rand get_tree_str prune_and_update zeros grow PyDTree range fit threshold ones make_toy_data get_arrayed_tree nan zeros PyDTree array X assert_allclose fit make_toy_data X HistogramUnivariateDensity IndependentDensity GaussianDensity _MixtureDensity rand check_random_state create_inverse_canonical_destructor fit IndependentDestructor IndependentDestructor IndependentDestructor TreeDestructor IndependentDestructor CompositeDestructor CompositeDestructor DeepDestructor DeepDestructorCV AutoregressiveDestructor AutoregressiveDestructor rmtree exists copytree add_stylesheet makedirs time get_maf_data debug _setup_loggers sem strftime _get_model mean dict vstack score_samples fit join debug _get_experiment_filename_and_label _get_pair_estimators CompositeDestructor _get_copula_destructor _get_pair_canonical_destructor CompositeDestructor dict array extend _generate_pixel_spiral stdout setFormatter getLogger addHandler StreamHandler Formatter captureWarnings DEBUG setLevel FileHandler join replace realpath dirname makedirs join RandomState realpath dirname _get_mnist_raw join str urlretrieve int RandomState concatenate print extractall close _get_cifar10_data_and_labels append range _make_dir open print fetch_mldata range _make_dir rand astype float32 _logit_transform hstack astype float32 _logit_transform _flip_augmentation int hstack sqrt reshape zeros makedirs zip get_maf_data print _data_dict_to_arr _get_maf_original warn zip ravel enumerate MNIST join hasattr POWER GAS HEPMASS warn BSDS300 dict realpath dirname trn CIFAR10 append MINIBOONE data join T concatenate fetch_mldata print lexsort warn target argsort realpath vstack dirname print_n_diff zip set_params extend CompositeDestructor AutoregressiveDestructor LinearProjector index make_toy_data _fit_and_score _get_toy_destructors_and_names | davidinouye/destructive-deep-learning | 1,855 |
davidmatthews1uvm/2019-IROS | ['word embeddings'] | ['Word2vec to behavior: morphology facilitates the grounding of language in machines'] | experiments/quadruped.py Pyrosim/pyrosim/simulator/quick_test.py experiments/twig.py experiments/spherebot.py experiments/w2v_vecs.py Pyrosim/pyrosim/tests/test.py Pyrosim/setup.py demos/word2vecDatabase.py Pyrosim/pyrosim/pyrosim.py Pyrosim/pyrosim/bodies.py Pyrosim/pyrosim/robot.py Pyrosim/pyrosim/__init__.py demos/demo.py experiments/w2v_robot.py experiments/job.py robot_factory get_internal_bot Word2VecVectorSpace robot_factory shuffle_vec create_new_job get_internal_bot Quadruped SphereBot Twig W2VRobot make_sure_path_exists Simulator Test_Pyrosim get_internal_bot randint range len rmtree makedirs | # About ## BibTex <pre> @INPROCEEDINGS{Matthews2019morphology, author={D. {Matthews} and S. {Kriegman} and C. {Cappelle} and J. {Bongard}}, booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, title={Word2vec to behavior: morphology facilitates the grounding of language in machines.}, year={2019}, doi={10.1109/IROS40897.2019.8967639}, url={<a href="https://ieeexplore.ieee.org/document/8967639">https://ieeexplore.ieee.org/document/8967639</a>}, | 1,856 |
davidmoeljadi/INDRA | ['machine translation'] | ['Building an HPSG-based Indonesian Resource Grammar (INDRA)'] | tsdb/gold/Cendana/postag.py utils/formoin.py utils/ind2yy.py | # Indonesian Resource Grammar (INDRA) This is a medium-sized computational Indonesian grammar in [HPSG](http://hpsg.stanford.edu/) using the [DELPH-IN](http://delph-in.net) infrastructure. It is bootstrapped using the [LinGO Grammar Matrix](http://www.delph-in.net/matrix/). Compile with: ``` ace -g ace/config.tdl -G ind.dat ``` Run it with: ``` echo "saya makan kue" | ace -g ind.dat ``` | 1,857 |
davidqiu1993/Hackauton2018_DolanWins | ['autonomous driving'] | ['Adaptive Performance Assessment For Drivers Through Behavioral Advantage'] | src/calc_adv.py src/train_value_model.py src/train_behavior_model.py src/calc_optimal_placement.py src/preproc_data.py src/plot_train.py main calc_adv main calc_optimal_placement plot_vis_drv main plot_train main proc_data process_fields train_model main build_model train_model main build_model load list std print mean argsort reverse save append array range predict len print load_weights calc_adv print fmin show concatenate print set_xlabel add_subplot set_ylabel linspace figure set_zlabel append zeros plot_surface array range predict load calc_optimal_placement append array range len xlabel ylim ylabel plot show plot_train figure print set_trace range save append DataFrame process_fields read_csv len list T columns hstack set mean shape zeros empty std range len proc_data Sequential Adam add Dense compile fit save_weights load_weights isfile train_model concatenate build_model | # Hackauton 2018 Adapted Performance Assessment For Drivers Through Behavioral Advantage. Reference: ``` @article{qiu2018adaptive, title={Adaptive Performance Assessment For Drivers Through Behavioral Advantage}, author={Qiu, Dicong and Paga, Karthik}, journal={arXiv preprint arXiv:1804.08219}, year={2018}, month={apr} | 1,858 |
davidreiman/deblender | ['semantic segmentation'] | ['Deblending galaxy superpositions with branched generative adversarial networks'] | preprocessing.py deblender/layers.py deblender/__init__.py main.py deblender/opt.py deblender/plotting.py deblender/graph.py deblender/models.py deblender/utils.py get_batch perturb downsample crop_and_downsample main crop plot_batch merge Graph BaseGraph dense leaky_relu prelu conv_block_2d conv_2d batch_norm res_block_2d flatten res_block_1d conv_block_1d subpixel_conv BaseModel VGG19 Generator Discriminator bayesian_optimization make_plot get_total_params get_trainable_params np_to_tfrecords DataSampler to_stdout restore_session concatenate e pi choice AffineTransform uniform power array format print reshape choice listdir array set_aspect show subplots axis subplots_adjust imshow range len join list format get_batch tqdm np_to_tfrecords range makedirs batch_normalization int lower conv1d batch_normalization int lower conv2d int lower conv2d depth_to_space batch_normalization lower conv1d batch_normalization conv2d lower conv2d lower lower items list build_graph evaluate to_stdout print get_best_result add_observation enumerate finalize Bar finish append train next range Study BayesianOptimization str join format text add_subplot axis GridSpec tight_layout subplots_adjust imshow close around savefig Figure range compare_psnr items list format Features TFRecordWriter reshape print SerializeToString write Example append range _dtype_feature import_meta_graph restore latest_checkpoint get_shape trainable_variables trainable_variables analyze_vars items list format hasattr isinstance print | # Deblending galaxy superpositions with branched generative adversarial networks **Authors:** David M. Reiman & Brett E. Göhre **MNRAS:** https://doi.org/10.1093/mnras/stz575 **arXiv:** https://arxiv.org/abs/1810.10098 **Abstract:** Near-future large galaxy surveys will encounter blended galaxy images at a fraction of up to 50% in the densest regions of the universe. Current deblending techniques may segment the foreground galaxy while leaving missing pixel intensities in the background galaxy flux. The problem is compounded by the diffuse nature of galaxies in their outer regions, making segmentation significantly more difficult than in traditional object segmentation applications. We propose a novel branched generative adversarial network (GAN) to deblend overlapping galaxies, where the two branches produce images of the two deblended galaxies. We show that generative models are a powerful engine for deblending given their innate ability to infill missing pixel values occluded by the superposition. We maintain high peak signal-to-noise ratio and structural similarity scores with respect to ground truth images upon deblending. Our model also predicts near-instantaneously, making it a natural choice for the immense quantities of data soon to be created by large surveys such as LSST, Euclid and WFIRST. ## Architecture <img src="/docs/figures/generator.png"><br> <img src="/docs/figures/discriminator.png"> ## Samples <img src="/docs/figures/sample-1.png"><br>
davidstutz/superpixel-benchmark | ['superpixels'] | ['SEEDS: Superpixels Extracted via Energy-Driven Sampling', 'Superpixels: An Evaluation of the State-of-the-Art'] | lib_wp/demo_waterpixels_smil_with_parser.py gray_area_filtering one_min_per_grid_cell demo_m_waterpixels im_labelled_square_grid_points my_area_filtering my_gradient areaClose Image areaOpen gray_area_filtering Image getSlice splitChannels mergeChannels range gradient_LAB gradient Image copyChannel Image min getNumArray fill range applyLookup measMaxVals computeBlobs Image test measMinVals compare label mul computeBlobs Image watershedExtinction dist getSize CrossSE applyLookup measMaxVals minima one_min_per_grid_cell measMinVals add my_area_filtering test copy label SquSE float basins im_labelled_square_grid_points my_gradient | # Superpixels: An Evaluation of the State-of-the-Art [](https://travis-ci.org/davidstutz/superpixel-benchmark) This repository contains the source code used for evaluation in [1], a large-scale comparison of state-of-the-art superpixel algorithms. **[ArXiv](https://arxiv.org/abs/1612.01601) | [Project Page](http://davidstutz.de/projects/superpixel-benchmark/) | [Datasets](https://github.com/davidstutz/superpixel-benchmark-data) | [Doxygen Documentation](https://davidstutz.github.io/superpixel-benchmark/)** This repository subsumes earlier work on comparing superpixel algorithms: [davidstutz/gcpr2015-superpixels](https://github.com/davidstutz/gcpr2015-superpixels), | 1,860 |
davmre/gprf | ['gaussian processes'] | ['Gaussian Process Random Fields'] | block_clustering.py gpy_shims.py gprfopt.py gpy_linalg.py pdtree_clustering.py synthetic.py seismic/seismic_util.py gprfopt_analyze.py run_seismic.py seismic/combine_clusters.py seismic/scrape_seismic.py seismic/analyze_seismic.py seismic/align_seismic_waves.py seismic/generate_sorted.py gprf.py pair_distances cluster_rpc Blocker check_inv Blocker GPRF symmetrize_neighbors llgrad_joint_shim pair_distances llgrad_unary_shim sample_data SampledData do_optimization load_log grid_centers do_run exp_dir OutOfTimeError do_gpy_gplvm main analyze_run build_run_name gen_runexp gen_runs truegp_run_params eighty_run_params write_plot load_results read_result_line plot_ll main fitc_run_params vis_points load_final_results dtrtri multiple_pdinv pddet dpotri pca symmetrify tdot_blas trace_dot symmetrify_weave symmetrify_numpy force_F_ordered_symmetric cholupdate DSYR_blas ppca DSYR_numpy _mdot_r backsub_both_sides pdinv force_F_ordered tdot_numpy jitchol mdot dtrtrs DSYR tdot dpotrs GPyConstDiagonalGaussian pdtree_cluster PDTree dist_deg analyze_run_result do_optimization cov_prior load_data seismic_exp_dir main dist_km dist_lld sample_y sample_crazy_shape sample_synthetic load_seismic_locations align cluster_waves coordinate_ascent correlate_patches extract_patches correlation_surface coherency align_waves xcorr_valid my_xc offsets distances main mean_distance load_XY fakescrape scrape_isc ev_from_line extract_ev oldmain CouldNotScrapeException main load_events scraped_to_evid_dict norm all choice median len dot abs max defaultdict add seed join set_centers mkdir_p cluster_rpc optimizer_array GPLVM flatten SparseGPLVM obs_std link_parameter Param open GPyConstDiagonalGaussian SY size close copy RBF join time noise_var BayesianGPLVM print minimize write fix X x join time concatenate print minimize write close flatten array log x open join load join prediction_error flush print build_gprf load_log SX 
write close flatten mean_distance enumerate open ceil sqrt seed exp sample_data randn print reshape grid_centers SX do_optimization X_obs update_X do_gpy_gplvm analyze_run build_gprf len join mkdir_p build_run_name add_argument do_run exp_dir mkdir_p ArgumentParser parse_args load_log join int list items split join wait add_subplot flatten linspace Figure tick_params Popen seed sorted ones SX scatter savefig sum replace set_centers FigureCanvasAgg set_xlim sqrt block_idxs cluster_rpc listdir enumerate load join print grid_centers set_yticks exp_dir set_facecolor split zeros reblock set_ylim len items sorted plot set_xlabel set_xlim add_subplot FigureCanvasAgg set_ylabel set_facecolor savefig legend Figure set_ylim append sqrt defaultdict append sqrt defaultdict defaultdict print sqrt get_nblocks append float join close write append open gen_runexp truegp_run_params fitc_run_params eighty_run_params int gen_runs vis_points print print set_trace dpotrf ascontiguousarray mean any cholesky eye diag asfortranarray force_F_ordered force_F_ordered symmetrify diag sum jitchol log mdot dtrtri log jitchol dpotri sum diag symmetrify force_F_ordered print svd mean std T randn mean shape masked_invalid range c_int c_char data_as c_double strides byref c_void_p asfortranarray dsyrk zeros max symmetrify dsyr c_int c_char data_as c_double byref c_void_p symmetrify symmetrify_weave int T inline shape tril triu_indices_from size inline copy dtrtrs leaf_idx copy PDTree radians cos sqrt sin arcsin dist_deg radians exp reshape pi sum array log len join mkdir_p dist_km threshold open seed rpc_blocksize update_covs load_log init_cov npts update_X close mad seismic_exp_dir enumerate load join task init_x print write load seed copy GPCov threshold arange randn do_optimization synth_lscale obs_std save seed rpc_blocksize pdtree_cluster init_cov npts analyze_run_result neighbors copy seismic_exp_dir load analyze task init_x reshape GPRF load_data array len seed sqrt P randn wfn_params mcov L 
VectorTree argsort dot coo_matrix array jitchol eye cholesky sparse_training_kernel_matrix dfn_params dfn_str wfn_str seed sample_y GPCov rand sample_crazy_shape max mean sqrt my_xc argmax std len zeros norm inline len argmax my_xc append int copy zip append int zeros align enumerate len dot T array zeros extract_patches my_xc enumerate correlate_patches extract_patches mean zeros dist_km enumerate len correlation_surface argmax permutation len time coord_ascent_run print coherency offsets max range len load_seismic_locations int load basename join argsort select_subset print load_XY mean_distance timegm int timetuple strip float datetime ev_from_line index split get utcfromtimestamp time content extract_ev lat lon mb exp log fakescrape print write mkdir_p load_events enumerate flush open join close range open print range | # Gaussian Process Random Fields This is code for the NIPS 2015 paper by David Moore and Stuart Russell. Aside from the usual dependencies (numpy, scipy, matplotlib), it depends on: - the [treegp](https://github.com/davmre/treegp) package, which contains C++ implementations of several distance functions, kernel functions and their derivatives. In particular, it implements the great-circle distance used in the seismic experiments. In case compatibility is broken at some future point, commit `a0aa7ae65a4b9144a499016bbf0ccaf0c611cc0d` is known to work with this code. - [GPy](https://github.com/SheffieldML/GPy), for comparisons to sparse GP-LVM. Experiments were run using version 0.6.0. Individual synthetic experiments from the paper can be reproduced by running `gprfopt.py` with appropriate options. For example, python gprfopt.py --n=10000 --seed=0 --yd=50 --lscale=0.06 --obs_std=0.02 --noise_var=0.01 --method=l-bfgs-b --local_dist=1.0 --nblocks=100 --task=x --maxsec=18000 will sample a synthetic problem with 10000 points, random seed 0, 50-dimensional output, SE kernel lengthscale 0.06 (note a small difference from the paper: this implementation scales the world to always lie within the unit square, so larger problems correspond to smaller lengthscales) and positional noise stddev 0.02, and output noise variance 0.01, and then attempt to solve this problem by running L-BFGS in a local GP model (local_dist=1.0 specifies a purely local GP, local_dist < 1.0 defines a GPRF and the specific value does not matter) with 100 blocks, solving only for the X locations (not kernel params), and running for a maximum time of 18000 seconds. The results will be saved under the home directory in `~/gprf_experiments/`. A subdirectory is created for each experiment, and the file `results.txt` contains the objective value and mean location error at each step (along with other quantities). After running a synthetic experiment, you can visualize the results, e.g., for the previous example, python gprfopt_analyze.py vis ~/gprf_experiments/10000_10500_100_0.060000_0.020000_1.0000_50_l-bfgs-b_x_-1_0.0100_s0_gprf0/ ~/gprf_experiments/synthetic_datasets/10500_10000_0.060000_0.020000_50_0.pkl 0 will generate a series of images, one for each optimization step, and attempt to stitch them into a video. | 1,861
davyneven/SpatialEmbeddings | ['instance segmentation', 'semantic segmentation', 'autonomous driving'] | ['Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth'] | src/models/BranchedERFNet.py src/models/__init__.py src/criterions/lovasz_losses.py src/datasets/CityscapesDataset.py src/train_config.py src/test.py src/train.py src/test_config.py src/utils/generate_crops.py src/models/erfnet.py src/datasets/__init__.py src/criterions/my_loss.py src/utils/utils.py src/utils/transforms.py get_args lambda_ train save_checkpoint val get_args lovasz_grad flatten_binary_scores iou binary_xloss xloss lovasz_hinge_flat StableBCELoss lovasz_hinge lovasz_softmax_flat mean flatten_probas lovasz_softmax iou_binary calculate_iou SpatialEmbLoss CityscapesDataset get_dataset BranchedERFNet Decoder DownsamplerBlock Encoder Net UpsamplerBlock non_bottleneck_1d get_model process RandomRotation ToTensor get_transform Resize RandomCrop CropRandomObject Cluster AverageMeter Logger Visualizer update format criterion model print param_groups squeeze AverageMeter zero_grad backward tqdm mean item step enumerate eval print join copyfile save cumsum float sum len mean zip append float sum list map zip append float sum range mean lovasz_hinge_flat data lovasz_grad relu Variable sort dot float view Variable float flatten_binary_scores mean lovasz_softmax_flat data lovasz_grad Variable sort size dot append float abs range size view filterfalse isnan iter next enumerate sum item BranchedERFNet join int format size makedirs logical_and crop where dirname unique save array clip enumerate open append | # Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth This codebase implements the loss function described in: [Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth](https://arxiv.org/pdf/1906.11109.pdf) Davy Neven, Bert De Brabandere, Marc Proesmans, and Luc Van Gool Conference on Computer Vision and Pattern Recognition (CVPR), june 2019 Our network architecture is a multi-branched version of [ERFNet](https://github.com/Eromera/erfnet_pytorch) and uses the [Lovasz-hinge loss](https://github.com/bermanmaxim/LovaszSoftmax) for maximizing the IoU of each instance. <p align="center"> <img src="static/teaser.jpg" /> </p> ## License
dayihengliu/Text-Infilling-Gradient-Search | ['text infilling', 'text generation'] | ['TIGS: An Inference Algorithm for Text Infilling with Gradient Search'] | Util/myAttWrapper.py Util/myLM.py Util/my_seq2seq.py Util/my_helper.py Util/GSutil.py Model.py Util/myAttoLM.py Util/my_seq2seq_Diverse.py Util/bleu.py bleu.py Util/myAttLM.py Util/myAttLM_Diverse.py Util/DiverseDecode.py Util/myUtil.py Util/myResidualCell.py Util/myAttmoLM.py fetch_data geometric_mean _count_ngram BLEU clip_count best_length_match count_ngram get_reference_count _BLEU brevity_penalty LM fetch_data geometric_mean BLEU clip_count best_length_match count_ngram brevity_penalty _check_maybe _get_scores BeamSearchDecoder BeamSearchDecoderOutput _maybe_tensor_gather_helper _tensor_gather_helper _beam_search_step _length_penalty tile_batch BeamSearchDecoderState _mask_probs FinalBeamSearchDecoderOutput _tile_batch cal_candidate_list _init_model show_blank _construct_blank _init_blank cal_optimal idx2str cal_dist replace_list str2idx _init_data _reload LM LM LM LM _bahdanau_score SelfAttWrapper SelfAttOtWrapper SelfAttMulOtWrapper _luong_score LM ResidualWrapper gnmt_residual_fn LM_util LM_DP _unstack_ta MyHelper Seq2Seq Seq2Seq_util Seq2Seq_DP Seq2Seq Seq2Seq_util Seq2Seq_DP join readlines append walk open lower split append float range brevity_penalty len lower split float range brevity_penalty len list min max keys abs float exp lower split append range len append count_ngram range geometric_mean _count_ngram range append geometric_mean convert_to_tensor concatenate reshape concat shape set_shape tile expand_dims ndims flatten isinstance TensorArray mod to_int32 concat _tensor_gather_helper logical_not BeamSearchDecoderState top_k set_shape map_structure to_int64 multiply squeeze shape cast append expand_dims constant_value convert_to_tensor finished one_hot _get_scores log_softmax BeamSearchDecoderOutput lengths _mask_probs equal constant reshape float32 log_probs scatter_nd logical_or split 
_length_penalty convert_to_tensor set_shape constant_value one_hot reshape concat tile expand_dims _check_maybe append int tolist len join strip deepcopy append range X_w2id len enumerate cal_candidate_list tolist argsort deepcopy idx2str tolist infer append range len load open restore LM_util Graph dict LM_DP Session dict restore dtype squeeze matmul expand_dims get_variable dtype rsqrt square reduce_sum expand_dims get_variable map_structure | # TIGS: An Inference Algorithm for Text Infilling with Gradient Search This repo contains the code and data of the following paper: >**TIGS: An Inference Algorithm for Text Infilling with Gradient Search**, *Dayiheng Liu, Jie Fu, Pengfei Liu, Jiancheng Lv*, Association for Computational Linguistics. **ACL** 2019 [[arXiv]](https://arxiv.org/abs/1905.10752) ## Overview <p align="center"><img width="80%" src="1.png"/></p> Given a well-trained sequential generative model, generating missing symbols conditioned on the context is challenging for existing greedy approximate inference algorithms. We propose a dramatically different inference approach called Text Infilling with Gradient Search (**TIGS**), in which we search for infilled words based on gradient information to fill in the blanks. To the best of our knowledge, this could be the first inference algorithm that does not require any modification or training of the model and can be broadly used in any sequence generative model to solve the fill-in-the-blank tasks. ## Dependencies - Jupyter notebook 4.4.0 - Python 3.6 - Tensorflow 1.6.0+
dbtmpl/OPMask | ['instance segmentation', 'semantic segmentation'] | ['Prior to Segment: Foreground Cues for Weakly Annotated Classes in Partially Supervised Instance Segmentation'] | opmask/modeling/__init__.py opmask/utils/training_utils.py opmask/engine/__init__.py start.py opmask/modeling/box_head.py opmask/utils/class_splits.py opmask/engine/trainer.py opmask/__init__.py opmask/layers/batch_norm.py opmask/modeling/roi_heads.py opmask/utils/general.py opmask/evaluation/__init__.py opmask/modeling/mask_head.py opmask/evaluation/coco_partially_supervised.py opmask/utils/cam_utils.py main train_or_eval perform_eval GeneralTrainer _evaluate_predictions_on_coco_ps PartiallySupervisedEvaluator NaiveSyncBatchNorm AllReduce get_norm CAMBoxHeadConv build_box_head CamMaskHead MetaCamMaskHead FPNCamRoiHeads process_cams_batch normalize_batch normalize_and_interpolate_batch add_opmask_cfg save_exp_setup overall_setup default_setup setup_paths create_experiment_directory get_gt_masks mask_loss_ps get_ps_mask verify_results resume_or_load build_model test WEIGHTS is_main_process eval_only resume_or_load trainer overall_setup pop deepcopy evaluate COCOeval summarize accumulate loadRes array len NAME cat normalize_batch size view CN add_opmask_cfg merge_from_file save_exp_setup config_file get_cfg merge_from_list default_setup opts freeze setup_paths folder_name join lower join rmtree make_archive exists makedirs OUTPUT_DIR str read format collect_env_info join info config_file CUDNN_BENCHMARK setup_logger get_world_size resume eval_only abspath get_rank OUTPUT_DIR create_experiment_directory seed_all_rng append tensor to put_scalar get_ps_mask get_event_storage put_image size extend numel sigmoid stack binary_cross_entropy_with_logits item append to max cat enumerate | # OPMask [](https://www.python.org/) [](https://numpy.org/doc/1.18/) [](https://pytorch.org/) [](https://pytorch.org/) This repository provides the official implementation of the paper: > **[Prior to Segment: Foreground Cues for Weakly Annotated Classes in Partially Supervised Instance Segmentation](https://openaccess.thecvf.com/content/ICCV2021/html/Biertimpel_Prior_to_Segment_Foreground_Cues_for_Weakly_Annotated_Classes_in_ICCV_2021_paper.html) (ICCV 2021)** <br> > *†[David Biertimpel](https://scholar.google.com/citations?user=AIu7ihgAAAAJ&hl=en), †[Sindi Shkodrani](https://scholar.google.nl/citations?user=fFVkKNgAAAAJ&hl=en), *[Anil S. Baslamisli](https://scholar.google.nl/citations?user=mc4l2J4AAAAJ&hl=en) and †[Nóra Baka](https://scholar.google.com/citations?user=ahfzQHEAAAAJ&hl=en) <br> > *University of Amsterdam, †TomTom<br> > pre-print : https://arxiv.org/abs/2011.11787 <br> ## Abstract
ddkang/advex-uar | ['adversarial defense'] | ['Testing Robustness Against Unforeseen Adversaries'] | advex_uar/attacks/pgd_attack.py advex_uar/analysis/compute_ata.py advex_uar/attacks/attacks.py advex_uar/attacks/elastic.py advex_uar/common/loader.py advex_uar/analysis/compute_uar.py advex_uar/attacks/gabor.py advex_uar/analysis/calibrate_eps.py advex_uar/attacks/fog_attack.py advex_uar/examples/train.py advex_uar/eval/cifar10c.py advex_uar/attacks/jpeg_attack.py advex_uar/eval/evaluator.py advex_uar/attacks/__init__.py advex_uar/attacks/fog.py advex_uar/common/pyt_common.py advex_uar/train/__init__.py advex_uar/logging/logger.py advex_uar/attacks/snow_attack.py advex_uar/common/flags_holder.py advex_uar/attacks/jpeg.py advex_uar/logging/get_wandb_logs.py advex_uar/common/models/cifar10_resnet.py advex_uar/attacks/elastic_attack.py advex_uar/logging/merge_logs.py advex_uar/__init__.py advex_uar/common/__init__.py advex_uar/eval/__init__.py advex_uar/attacks/fw_attack.py advex_uar/train/trainer.py advex_uar/attacks/gabor_attack.py advex_uar/attacks/snow.py setup.py advex_uar/examples/eval.py main get_defenses parse_logs get_attack get_attacks is_type_match get_ata_val get_defense_from_run_id get_defense main get_attack_types compute_uar main get_defense_run_ids ImagenetTransform PixelModel InverseImagenetTransform get_eps_params get_imagenet_params AttackWrapper GaussianSmoothing ElasticDeformation ElasticAttack fog_creator FogAttack FrankWolfeAttack gabor_rand_distributed get_gabor_with_sides valid_position normalize_var get_gabor normalize GaborAttack c_quantize JPEG ycbcr_to_rgb_jpeg rgb_to_ycbcr quality_to_factor jpeg_compress_decode tensordot_pytorch upsampling_420 c_dequantize image_to_patches y_quantize rgb_to_ycbcr_jpeg make_quantization_tables patches_to_image dct_8x8 downsampling_420 idct_8x8 idct_8x8_ref y_dequantize dct_8x8_ref ycbcr_to_rgb JPEGAttack PGDAttack GaussianSmoothing snow_creator trapez make_kernels weighted_line SnowAttack apply_snow 
FlagHolder StridedImageFolder get_attack get_imagenet_model init_logger get_step_size get_model get_cifar10_model _get_attack resnet110 resnet20 ResNet LambdaLayer resnet44 test resnet1202 resnet56 resnet32 _weights_init BasicBlock CIFAR10C BaseEvaluator CIFAR10Evaluator Accumulator CIFAR10CEvaluator ImagenetEvaluator ImagenetCEvaluator norm_to_pil_image main get_ckpt run main extract_summary dump_many_runs dump_single_run configure_wandb Logger init_wandb main CIFAR10Trainer BaseTrainer accuracy correct Metric ImagenetTrainer join sorted format print linear_sum_assignment append abs enumerate append get_attack append get_defense append is_type_match get_defense_from_run_id get_defenses parse_logs get_attacks get_ata_val append append str sorted format list items print get_attack_types append sum compute_uar get_defense_run_ids get_defense_from_run_id stack unsqueeze append full range append full range filldiamonds min cuda fillsquares size view exp cos pi shape sin meshgrid zeros get_gabor range rfft view size pow sqrt abs conv2d size view list reshape matmul shape iter dim range len T view size tensordot_pytorch as_tensor T view size tensordot_pytorch as_tensor avg_pool2d size transpose view list product outer zeros array range list product view size cos pi tensordot_pytorch cuda zeros as_tensor array range fill as_tensor empty T y_table rounding c_table rounding list product outer zeros array range list product view size cos pi tensordot_pytorch cuda zeros as_tensor array range int size transpose view repeat T view size tensordot_pytorch tensor as_tensor T view size tensordot_pytorch tensor cuda list downsampling_420 idct_8x8 ycbcr_to_rgb_jpeg clamp transpose patches_to_image dct_8x8 c_quantize pad stack upsampling_420 image_to_patches y_quantize rgb_to_ycbcr_jpeg keys split arange reshape reduce flatten repeat ceil int GaussianSmoothing randint range choice uniform ceil zeros gaussian_blur cuda weighted_line append pow conv2d cuda range cat max join format 
getcwd strftime Logger makedirs weight __name__ kaiming_normal print uint8 rollaxis add_ mul_ zip Tensor numpy join format wandb_username print wandb_ckpt_project dir file wandb_run_id import_module rename ckpt_path download Api run use_max_step dataset get_ckpt _dict initialize step_size FlagHolder load_state_dict init_logger attack class_downsample_factor Evaluator resnet_size evaluate n_iters get_attack use_wandb summary get_step_size get_model wandb_project epsilon run print config format extract_summary append extract_summary format print runs dump_single_run dump_many_runs Api config list items print id init setattr | Testing Robustness Against Unforeseen Adversaries ================================================= This repository contains code and trained models for the paper [Testing Robustness Against Unforeseen Adversaries](http://arxiv.org/abs/1908.08016) by [Daniel Kang*](https://ddkang.github.io/), [Yi Sun*](https://yisun.io), [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Tom Brown](https://github.com/nottombrown), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/). More specifically, we provide implementations and code to evaluate the Unforeseen Attack Robustness (UAR) metric from [our paper](http://arxiv.org/abs/1908.08016) for the following adversarial attacks: L<sub>∞</sub>, L<sub>2</sub>, L<sub>1</sub>, L<sub>∞</sub>-JPEG, L<sub>2</sub>-JPEG, L<sub>1</sub>-JPEG, Elastic, Fog, Gabor, Snow. For each attack, we release calibrated distortion sizes and adversarially trained ResNet-50 models at these sizes for ImageNet-100, a 100-class subset of ImageNet. Table of contents ================= <!--ts--> * [What is Unforeseen Adversarial Robustness (UAR)?](#what-is-adversarial-robustness-transfer-uar) * [Installation](#installation) * [Usage](#usage) | 1,865 |
de-simone/MarkovChainDensityEstimator | ['density estimation'] | ['Nonparametric Density Estimation from Markov Chains'] | MCDE.py MCDensityEstimator | # Markov Chain Density Estimator (MCDE) MCDE is a novel way to estimate the PDF of a distribution based on the properties of Markov Chains. This estimator can be seen as a modified version of Kernel Density Estimation (KDE) as well. More details can be found on [20xx.xxxx](https://arxiv.org/) All data science tasks that have a density based approach can be tackled using this estimator. We have showed that local anomaly detection based on MCDE is very effective. We have also showed that this density estimator can perform better than KDE for a large enough sample. ### Installation MCDE works in both python 2 and python 3. It requires: numpy, scipy, mcint and sklearn. In order to import the main class just copy the file [MCDE.py](./MCDE.py) to your working directory and add on top of your python file: ``` from MCDE import MCDensityEstimator ``` ### Parameters | 1,866 |
deNsuh/segan-pytorch | ['speech enhancement'] | ['SEGAN: Speech Enhancement Generative Adversarial Network'] | data_preprocess.py vbnorm.py data_generator.py emph.py emph_test.py model.py AudioSampleGenerator downsample_16k verify_data process_and_serialize slice_signal de_emphasis pre_emphasis TestEmphasis sample_latent Generator split_pair_to_vars Discriminator load join format print walk join format print abspath run walk makedirs load int append range len join time format zip print slice_signal save array walk enumerate makedirs stack to numpy pre_emphasis | # Pytorch Implementation of SEGAN (Speech Enhancement GAN) Implementation of [SEGAN](https://arxiv.org/abs/1703.09452) by Pascual et al. in 2017, using pytorch. Original Tensorflow version can be found [here](https://github.com/santi-pdp/segan). ## Prerequisites - python v3.5.2 or higher - pytorch v0.4.0 - CUDA preferred - noisy speech dataset downloaded from [here](https://datashare.is.ed.ac.uk/handle/10283/2791) - libraries specified in `requirements.txt` ### Installing Required Libraries | 1,867 |
debadeepta/vnla | ['imitation learning'] | ['Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention'] | code/tasks/VNLA/eval.py code/tasks/VNLA/env.py code/tasks/VNLA/oracle.py code/tasks/VNLA/verbal_ask_agent.py code/tasks/VNLA/train.py code/tasks/VNLA/utils.py code/src/driver/driver.py code/tasks/VNLA/flags.py code/tasks/VNLA/agent.py code/tasks/VNLA/model.py code/tasks/VNLA/ask_agent.py BaseAgent AskAgent EnvBatch VNLABatch Evaluation make_parser EncoderLSTM Attention AttentionSeq2SeqModel AskAttnDecoderLSTM NextOptimalOracle make_oracle StepByStepSubgoalOracle MultistepShortestPathOracle AskOracle ShortestPathOracle load setup compute_ask_stats set_path train_val save train load_region_label_to_name load_datasets read_subgoal_vocab load_nav_graphs write_vocab asMinutes load_panos_to_region read_vocab load_img_features load_region_map timeSince Tokenizer build_vocab VerbalAskAgent add_argument ArgumentParser join data_dir model_prefix getenv exp_dir makedirs getattr set_path vars setattr len extend Counter append sum enumerate write_results score ask_losses save add_is_success defaultdict list nav_losses append range replace param_groups test results_path log_every join time items losses print min average exp_dir array seed join write_vocab data_path manual_seed build_vocab device exists setup AskAgent subgoal_vocab exit Adam pprint load_state_dict to device_id Tokenizer log_every VerbalAskAgent read_vocab vars load_path load join print n_iters data_path parameters external_main_vocab VNLABatch getenv getenv compile update list load_datasets split_sentence Counter append most_common Tokenizer split print print set floor time print set print | # Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention [](https://opensource.org/licenses/MIT) <img src="teaser/pytorch-logo-dark.png" width="10%"> Authors: [Khanh Nguyen](https://khanhptnk.github.io), [Debadeepta 
Dey](http://www.debadeepta.com/), [Chris Brockett](https://www.microsoft.com/en-us/research/people/chrisbkt/), [Bill Dolan](https://www.microsoft.com/en-us/research/people/billdol/). This repo contains code and data-downloading scripts for the paper [Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention](https://arxiv.org/abs/1812.04155) (CVPR 2019). We present Vision-based Navigation with Language-based Assistance (VNLA, pronounced as *"Vanilla"*), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.  <p align="center"> <a href="https://www.youtube.com/watch?v=Vp6C29qTKQ0&feature=youtu.be" target="_blank"><img src="teaser/vnla_demo_video.png" alt="IMAGE ALT TEXT HERE" width="560" height="315" border="10"/></a> </p> ### Development system | 1,868 |
debjitpaul/Multi-Hop-Knowledge-Paths-Human-Needs | ['common sense reasoning'] | ['Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs'] | src/neural_model/evaluator.py src/neural_model/Human_needs_with_knowledge_elmo.py src/graph_model/make_sub_graph_server.py src/neural_model/experiment.py src/graph_model/conceptnet2graph.py src/neural_model/experiment_reiss_without.py src/data_prep/read_pluctik_emotion.py src/neural_model/Human_needs_elmo_without.py src/data_prep/read_human_needs_reiss.py src/graph_model/extract_path.py main read_file main read_file main ontology_create page_rank_path recurive_path page_rank get_top_concepts get_top_need get_neighours get_path closeness_path out_degree extract_paths_between_concepts_human_needs path read_file personalized_page_rank betweenness_path closeness main betweenness extract_paths_between_concepts make_sub_graph main read_file extract_paths_between_concepts MLTEvaluator create_batches_of_sentence_ids run_experiment padding is_float parse_config process_sentences read_input_files create_batches_of_sentence_ids run_experiment padding is_float parse_config process_sentences read_input_files Human_needs print range enumerate len add_argument read_file txtfile ArgumentParser parse_args add_vertices list Graph print add_edges summary zip union range write_pickle len ontology_create strip literal_eval open append vs append personalized_pagerank nlargest neighbors append personalized_pagerank nlargest neighbors simplify recurive_path index get_neighours sort index append pagerank append neighbors range len update join replace get_all_shortest_paths get_top_need get_top_concepts nlargest extract_paths_between_concepts_human_needs extend split append range len append range len remove personalized_page_rank append range len personalized_pagerank closeness pagerank pagerank betweenness str purpose read Graph get_path input_path write output_path close graph_path open make_sub_graph append list 
get_all_shortest_paths Graph subgraph strip len extend literal_eval set neighbors append union range write_pickle extract_paths_between_concepts graphpath outputpath list zip pad_sequences list zip get items isdigit read is_float getboolean OrderedDict getfloat SafeConfigParser getint float int OrderedDict append range len create_batches_of_sentence_ids process_batch str print write shuffle close get_results MLTEvaluator append_data open parse_config save Human_needs seed str list restore session construct_network range read_input_files shuffle initialize_session zip build_vocabs preload_word_embeddings get_parameter_count_without_word_embeddings items get_parameter_count remove print process_sentences split open close | # Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs (NAACL 2019) This directory contains the following parts of the 'Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs' experiment. We present a novel method to extract, rank, filter and select multi-hop relation paths from a commonsense knowledge resource to interpret the expression of sentiment in terms of their underlying human needs.  ## Requirements ~~~~ - python3+ - nltk - igraph - Tensorflow 1.12.0 - Tensorflow-hub | 1,869 |
debmandal/Tensorized_MSM | ['causal inference', 'time series'] | ['Weighted Tensor Completion for Time-Series Causal Inference'] | GEN_DATA/gendata.py ANALYSIS/compute_mse.py ESTIMATION/tensor_completion.py ESTIMATION/progressbar.py ANALYSIS/wta_robins_glm.py ESTIMATION/tensor_completion_hist.py ESTIMATION/wts_estimation.py printArray printProgressBar weighted_tensor_completion run_outiter estimated_atet weighted_tensor_completion run_outiter estimated_atet get_weights get_WW get_prob gen_tensor get_atet gen_data print str range len print int float format int norm join list clip zeros_like kruskal_to_tensor min shape log2 uniform parafac zeros range len int join min shape log2 zeros range load str print save zeros get_weights estimated_atet range weighted_tensor_completion ones shape dot expit normal expit ones min dot shape zeros abs get_prob range printProgressBar list normal ndarray expit zeros_like join int print min dot binomial zeros abs range len uniform norm kruskal_to_tensor int normal expit ndarray join min dot shape log2 binomial zeros abs range int normal join ndarray sum min mean shape log2 sqrt zeros abs std range | ## Tensorized_MSM This repository maintains the code for the paper [Weighted Tensor Completion for Time-Series Causal Inference](https://arxiv.org/abs/1902.04646). * Terminology * We have two kinds of worlds -- thin and fat. They respectively refer to the worlds narrow and wide in the paper. * For the experiment, the values of N and T are set as N=500 and T=10 for the thin world, and N=10 and T=500 for fat world, but they can be adjusted in the code. * We have two kinds of policies -- I and II. They respectively refer to simple and complex policies in the paper. - Description of the code * Folder [./GEN_DATA](https://github.com/debmandal/Tensorized_MSM/tree/main/GEN_DATA) contains the script to generate the datasets. * The main file is gendata.py. 
It takes three inputs -- an iteration count (it), the type of world, and the type of policy. For example, to generate the 23rd example for the thin world with policy II, run the following command:
```
| 1,870 |
decisionforce/pgdrive | ['autonomous driving'] | ['Improving the Generalization of End-to-End Driving through Procedural Generation'] | pgdrive/component/blocks/straight.py pgdrive/tests/test_functionality/test_random_engine.py pgdrive/engine/engine_utils.py pgdrive/utils/coordinates_shift.py pgdrive/utils/draw_top_down_map.py pgdrive/engine/core/manual_controller.py pgdrive/component/vehicle_module/navigation.py pgdrive/base_class/randomizable.py pgdrive/envs/safe_pgdrive_env.py pgdrive/utils/cutils.py pgdrive/envs/marl_envs/marl_parking_lot.py pgdrive/tests/local_tests/local_test_apply_action.py pgdrive/component/vehicle_module/base_camera.py pgdrive/envs/marl_envs/marl_bottleneck.py pgdrive/component/buildings/base_building.py pgdrive/tests/test_functionality/test_episode_release.py pgdrive/component/blocks/bottleneck.py pgdrive/tests/test_functionality/test_cull_scene.py pgdrive/tests/test_env/test_top_down_env.py pgdrive/tests/vis_block/vis_big.py pgdrive/engine/core/our_pbr.py pgdrive/obs/top_down_obs_multi_channel.py pgdrive/tests/test_functionality/test_config_consistency.py pgdrive/tests/vis_funtionality/vis_two_speed_retrieve.py pgdrive/component/blocks/fork.py pgdrive/component/blocks/std_t_intersection.py pgdrive/component/blocks/intersection.py pgdrive/component/vehicle_module/rgb_camera.py pgdrive/utils/config.py pgdrive/policy/base_policy.py pgdrive/component/algorithm/BIG.py pgdrive/tests/test_functionality/test_gen_map_read.py pgdrive/manager/agent_manager.py pgdrive/engine/core/force_fps.py pgdrive/tests/vis_env/vis_acc_break.py pgdrive/tests/test_functionality/test_destroy.py pgdrive/utils/random_utils.py pgdrive/base_class/base_object.py pgdrive/tests/vis_block/vis_in_ramp.py pgdrive/component/blocks/ramp.py documentation/source/conf.py pgdrive/component/vehicle_module/lidar.py pgdrive/manager/traffic_manager.py pgdrive/utils/generate_maps.py pgdrive/component/map/pg_map.py pgdrive/tests/scripts/profile_reset.py 
pgdrive/tests/scripts/profile_top_down_multi_channel_env.py pgdrive/tests/test_functionality/test_reborn.py pgdrive/tests/test_env/test_ma_env_force_reset.py pgdrive/component/vehicle/vehicle_type.py pgdrive/engine/core/collision_callback.py pgdrive/policy/AI_protect_policy.py setup.py pgdrive/tests/vis_block/vis_a_small_town.py pgdrive/tests/test_component/_test_osm_read.py pgdrive/envs/marl_envs/marl_inout_roundabout.py pgdrive/envs/marl_envs/__init__.py pgdrive/engine/__init__.py pgdrive/engine/interface.py pgdrive/examples/ppo_expert/_evaluate_expert.py pgdrive/tests/scripts/profile_env.py pgdrive/tests/test_component/test_utils.py pgdrive/component/lane/waypoint_lane.py pgdrive/tests/test_env/_test_change_friction_density_envs.py pgdrive/examples/render_big.py pgdrive/tests/vis_block/vis_parking_lot.py pgdrive/tests/vis_funtionality/vis_vehicle_num.py pgdrive/envs/marl_envs/multi_agent_pgdrive.py pgdrive/component/blocks/std_intersection.py pgdrive/tests/test_env/test_safe_env.py pgdrive/tests/vis_env/vis_argoverse.py pgdrive/utils/__init__.py pgdrive/engine/core/sky_box.py pgdrive/tests/vis_funtionality/vis_render_msg.py pgdrive/examples/profile_pgdrive.py pgdrive/envs/pgdrive_env.py pgdrive/tests/vis_block/vis_yy.py pgdrive/manager/replay_record_system.py pgdrive/engine/core/engine_core.py pgdrive/tests/test_component/test_detector_mask.py pgdrive/policy/env_input_policy.py pgdrive/tests/vis_block/vis_block_base.py pgdrive/tests/test_env/test_ma_roundabout_env.py pgdrive/tests/vis_env/vis_pgdrive_env.py pgdrive/tests/scripts/capture_obs.py pgdrive/tests/test_component/test_config.py pgdrive/tests/vis_block/vis_bottleneck.py pgdrive/tests/vis_funtionality/vis_installation_headless.py pgdrive/engine/core/onscreen_message.py pgdrive/component/vehicle_module/depth_camera.py pgdrive/obs/top_down_obs_impl.py pgdrive/__init__.py pgdrive/component/blocks/base_block.py pgdrive/component/map/base_map.py pgdrive/component/blocks/pg_block.py 
pgdrive/component/blocks/argoverse_block.py pgdrive/tests/test_functionality/test_get_closest_lane.py pgdrive/tests/vis_env/vis_safe_pgdrive.py pgdrive/engine/base_engine.py pgdrive/tests/scripts/profile_pgdrive_v2.py pgdrive/tests/test_component/test_asset_loader.py pgdrive/tests/test_env/test_ma_tollgate.py pgdrive/tests/test_functionality/test_distance_detector.py pgdrive/utils/math_utils.py pgdrive/engine/core/image_buffer.py pgdrive/tests/test_env/_test_remote_pgdrive_env.py pgdrive/tests/vis_block/vis_out_ramp.py pgdrive/tests/vis_block/vis_std_t_intersection.py pgdrive/utils/utils.py pgdrive/base_class/configurable.py pgdrive/component/lane/straight_lane.py pgdrive/manager/map_manager.py pgdrive/component/buildings/tollgate_building.py pgdrive/examples/ppo_expert/numpy_expert.py pgdrive/component/static_object/__init__.py pgdrive/examples/enjoy_saver.py pgdrive/examples/ppo_expert/remove_useless_state.py pgdrive/tests/vis_funtionality/vis_highway_render.py pgdrive/component/blocks/curve.py pgdrive/base_class/base_runnable.py pgdrive/tests/vis_funtionality/vis_depth_cam_ground.py pgdrive/tests/vis_funtionality/vis_depth_cam_no_ground.py pgdrive/tests/scripts/profile_top_down_v2.py pgdrive/tests/vis_env/vis_pgdrive_env_v2.py pgdrive/tests/vis_funtionality/vis_mini_map.py pgdrive/examples/draw_maps.py pgdrive/register.py pgdrive/tests/test_functionality/test_marl_reborn.py pgdrive/envs/__init__.py pgdrive/engine/physics_node.py pgdrive/tests/vis_block/vis_intersection.py pgdrive/tests/test_functionality/_test_save_episode.py pgdrive/tests/vis_block/vis_no_render.py pgdrive/manager/base_manager.py pgdrive/tests/test_functionality/test_obs_action_space.py pgdrive/envs/marl_envs/marl_intersection.py pgdrive/component/map/city_map.py pgdrive/engine/core/chase_camera.py pgdrive/envs/base_env.py pgdrive/tests/vis_funtionality/vis_manual_control_top_down_env.py pgdrive/obs/state_obs.py pgdrive/tests/test_env/local_test_pgdrive_rgb_depth.py 
pgdrive/tests/test_functionality/test_nested_config.py pgdrive/component/blocks/t_intersection.py pgdrive/tests/test_functionality/test_loading_map_from_json.py pgdrive/tests/test_functionality/test_marl_infinite_agents.py pgdrive/engine/scene_cull.py pgdrive/tests/vis_funtionality/vis_rgb_cam.py pgdrive/component/blocks/roundabout.py pgdrive/engine/core/light.py pgdrive/component/vehicle_module/distance_detector.py pgdrive/obs/observation_base.py pgdrive/tests/test_functionality/test_reward_cost_done.py pgdrive/tests/vis_block/vis_std_intersection.py pgdrive/utils/space.py pgdrive/tests/vis_env/vis_multi_agent_env.py pgdrive/tests/test_functionality/test_navigation.py pgdrive/obs/image_obs.py pgdrive/component/blocks/__init__.py pgdrive/tests/test_env/test_ma_intersection.py pgdrive/component/lane/circular_lane.py pgdrive/tests/test_functionality/test_obs_noise.py pgdrive/tests/test_functionality/test_get_map_image.py pgdrive/component/map/argoverse_map.py pgdrive/component/vehicle_module/mini_map.py pgdrive/examples/__init__.py pgdrive/engine/core/physics_world.py pgdrive/engine/asset_loader.py pgdrive/policy/manual_control_policy.py pgdrive/component/highway_vehicle/controller.py pgdrive/tests/test_functionality/test_object_collision_detection.py pgdrive/tests/scripts/generate_video_for_image_obs.py pgdrive/envs/marl_envs/marl_tollgate.py pgdrive/engine/core/terrain.py pgdrive/tests/scripts/profile_top_down_env.py pgdrive/component/road/road_network.py pgdrive/tests/vis_block/vis_curve_block.py pgdrive/utils/scene_utils.py pgdrive/tests/test_functionality/test_expert_performance.py pgdrive/component/road/road.py pgdrive/tests/vis_funtionality/vis_installation.py pgdrive/obs/top_down_renderer.py pgdrive/component/vehicle/vehicle_utils.py pgdrive/tests/local_tests/local_test_close_and_restart.py pgdrive/component/algorithm/blocks_prob_dist.py pgdrive/obs/top_down_obs.py pgdrive/tests/test_functionality/_test_record_restore_marl.py 
pgdrive/tests/vis_block/vis_straight_block.py pgdrive/tests/test_env/test_build_city.py pgdrive/component/blocks/create_block_utils.py pgdrive/tests/test_env/test_ma_bottleneck_env.py pgdrive/tests/vis_block/vis_t_intersection.py pgdrive/examples/enjoy_expert.py pgdrive/envs/top_down_env.py pgdrive/tests/vis_funtionality/vis_memory_leak.py pgdrive/tests/test_functionality/test_ego_vehicle.py pgdrive/component/static_object/base_static_object.py pgdrive/tests/vis_funtionality/vis_saver.py pgdrive/constants.py pgdrive/tests/test_env/_test_action_repeat_env.py pgdrive/tests/vis_funtionality/vis_build_city.py pgdrive/tests/test_env/test_ma_parking_lot.py pgdrive/tests/test_functionality/test_collision.py pgdrive/manager/object_manager.py pgdrive/component/blocks/first_block.py pgdrive/tests/test_env/test_naive_multi_agent.py pgdrive/component/road/__init__.py pgdrive/component/vehicle/base_vehicle.py pgdrive/policy/idm_policy.py pgdrive/tests/test_functionality/test_out_of_road.py pgdrive/manager/__init__.py pgdrive/component/highway_vehicle/kinematics.py pgdrive/component/static_object/traffic_object.py pgdrive/tests/test_component/_test_argoverse_map_read.py pgdrive/component/vehicle_module/PID_controller.py pgdrive/component/highway_vehicle/behavior.py pgdrive/examples/ppo_expert/__init__.py pgdrive/component/blocks/parking_lot.py pgdrive/base_class/nameable.py pgdrive/tests/scripts/profile_map_generation.py pgdrive/tests/test_policy/test_idm_policy.py pgdrive/tests/vis_block/vis_roundabout.py pgdrive/component/lane/abs_lane.py pgdrive/examples/enjoy_manual.py pgdrive/component/lane/argoverse_lane.py pgdrive/tests/scripts/benchmark_brake.py pgdrive/component/blocks/tollgate.py pgdrive/manager/spawn_manager.py is_win is_mac Mask CamMask CollisionGroup LineColor DrivableAreaProperty LineType Decoration Goal TerminationState BodyName get_env_list BaseObject PhysicsNodeList BaseRunnable Configurable Nameable Randomizable NextStep BIG BigGenerateMethod PGBlockConfig 
ArgoverseBlock BaseBlock Bottleneck Merge Split get_lanes_on_road ExtendStraightLane CreateAdverseRoad CreateTwoWayRoad create_wave_lanes CreateRoadFrom create_bend_straight block_socket_merge Curve FirstPGBlock OutFork Fork InFork InterSection ParkingLot PGBlock PGBlockSocket InRampOnStraight Ramp OutRampOnStraight Roundabout StdInterSection StdTInterSection Straight TollGate TInterSection TollGateBuilding LinearVehicle DefensiveVehicle AggressiveVehicle IDMVehicle ControlledVehicle Vehicle AbstractLane ArgoverseLane CircularLane StraightLane WayPointLane ArgoverseMap BaseMap MapGenerateMethod parse_map_config NextStep CityMap CityBIG BigGenerateMethod PGMap Road RoadNetwork GraphLookupTable BaseStaticObject TrafficObject TrafficBarrier TrafficWarning TrafficCone BaseVehicleState BaseVehicle DefaultVehicle MVehicle SVehicle random_vehicle_type LVehicle XLVehicle OpponentModelPredictiveControl ModelPredictiveControl PhysicSetting BaseCamera DepthCamera SideDetector LaneLineDetector DistanceDetector Lidar MiniMap Navigation PIDController Target RGBCamera AssetLoader close_asset_loader initialize_asset_loader BaseEngine get_global_config engine_initialized initialize_engine set_global_random_seed get_engine close_engine get_object Interface VehiclePanel BaseRigidBodyNode SceneCull MainCamera collision_callback _free_warning _suppress_warning EngineCore ForceFPS ImageBuffer Light Controller KeyboardController SteeringWheelController ScreenMessage _load_shader_str OurPipeline PhysicsWorld SkyBox Terrain BasePGDriveEnv _act PGDriveEnv SafePGDriveEnv TopDownPGDriveEnvV2 TopDownSingleFramePGDriveEnv TopDownPGDriveEnv _draw MABottleneckMap _long_run _vis _vis_debug_respawn _expert _profile MultiAgentBottleneckEnv MultiAgentRoundaboutEnv MARoundaboutMap _draw LidarStateObservationMARound _long_run _vis _vis_debug_respawn _expert RoundaboutSpawnManager _profile InterectionSpawnManager _draw _long_run _vis _vis_debug_respawn _expert MultiAgentIntersectionEnv _profile 
MAIntersectionMap show_map_and_traj _draw _long_run _vis _vis_debug_respawn MAParkingLotMap _expert _profile MultiAgentParkingLotEnv ParkingLotSpawnManager TollGateStateObservation MATollGateMap _draw _vis _long_run _vis_debug_respawn _expert StayTimeManager _profile MultiAgentTollgateEnv TollGateObservation pygame_replay MultiAgentPGDrive _vis panda_replay _test get_terminal_state load_weights expert value DrivingCallbacks initialize_ray get_trainer evaluate AgentManager BaseManager MapManager TrafficObjectManager Recorder Replayer SpawnManager TrafficManager TrafficMode ImageObservation ImageStateObservation ObservationBase StateObservation LidarStateObservation TopDownObservation LaneGraphics ObservationWindow VehicleGraphics WorldSurface ObservationWindowMultiChannel TopDownMultiChannel draw_top_down_map PheromoneRenderer draw_top_down_trajectory TopDownRenderer AIProtectPolicy BasePolicy EnvInputPolicy FrontBackObjects IDMPolicy ManualControlPolicy TakeoverPolicy local_test_apply_action local_test_close_and_restart get_result ImageEncoder gen_video test_asset_loader test_config_unchangeable test_config_sync test_detector_mask_in_lidar _search_angle DetectorMask _test_mask _line_intersect test_cutils_lidar test_detector_mask test_fake_cutils test_utils _test_cutils test_cutils _act test_pgdrive_env_rgb _t test_build_city test_ma_bottleneck_env test_ma_bottleneck_no_short_episode test_randomize_spawn_place test_ma_bottleneck_horizon _check_spaces_before_reset _check_shape test_ma_bottleneck_reward_sign _act test_ma_bottleneck_reward_done_alignment test_ma_no_reset_error test_ma_bottleneck_init_space test_ma_bottleneck_close_spawn _check_space _check_spaces_after_reset test_ma_bottleneck_reset test_ma_bottleneck_horizon_termination test_ma_bottleneck_40_agent_reset_after_respawn test_ma_env_force_reset test_ma_intersection_reset test_ma_intersection_reward_sign test_ma_intersection_env test_ma_intersection_40_agent_reset_after_respawn test_randomize_spawn_place 
_check_spaces_before_reset _check_shape test_ma_intersection_init_space _act test_ma_no_reset_error test_ma_intersection_horizon _check_space _check_spaces_after_reset test_ma_intersection_close_spawn test_ma_intersection_reward_done_alignment test_ma_intersection_no_short_episode test_ma_intersection_horizon_termination test_ma_parking_lot_env test_randomize_spawn_place test_ma_parking_lot_close_spawn _check_spaces_before_reset _check_shape test_ma_parking_lot_init_space test_ma_parking_lot_horizon_termination _act test_ma_no_reset_error test_ma_parking_lot_reset test_ma_parking_lot_reward_done_alignment test_ma_parking_lot_no_short_episode test_ma_parking_lot_40_agent_reset_after_respawn _check_space _check_spaces_after_reset test_ma_parking_lot_horizon test_ma_roundabout_close_spawn test_randomize_spawn_place test_ma_roundabout_reset test_ma_roundabout_env test_ma_roundabout_40_agent_reset_after_respawn test_ma_roundabout_horizon _check_spaces_before_reset _check_shape test_ma_roundabout_reward_done_alignment test_ma_roundabout_init_space test_ma_roundabout_no_short_episode test_ma_roundabout_horizon_termination _act test_ma_no_reset_error test_ma_roundabout_reward_done_alignment_1 _check_space _check_spaces_after_reset test_ma_roundabout_reward_sign test_randomize_spawn_place test_ma_toll_reward_done_alignment_1 test_ma_toll_env _check_spaces_before_reset _check_shape test_ma_toll_init_space test_ma_toll_reward_done_alignment_2 _act test_ma_toll_close_spawn test_ma_no_reset_error test_ma_toll_reward_sign test_ma_toll_horizon_termination test_ma_toll_reset test_ma_toll_no_short_episode _check_space _check_spaces_after_reset test_ma_toll_horizon test_ma_toll_40_agent_reset_after_respawn test_naive_multi_agent_pgdrive _step _a test_safe_env test_top_down_rendering test_action_repeat_env _test_action_repeat test_change_density_env test_change_friction _run _test_remote_pgdrive_env test_collision_with_sidewalk test_collision_with_vehicle test_line_contact 
test_config_consistency test_config_consistency_2 _test_cull_scene test_destroy test_original_lidar test_lidar_with_mask _assert_vehicle test_base_vehicle _nan_speed _get_heading_deg test_episode_release test_expert_with_traffic _evaluate test_expert_without_traffic test_gen_map_read test_get_lane_index test_save_map_image test_loaded_map_alignment test_map_buffering test_infinite_agents test_delay_done test_respawn test_navigation Target test_partially_update test_recursive_config test_dict_as_attribute test_config_identical TestCollisionEnv test_object_collision_detection TestObsActionSpace _act test_obs_noise test_out_of_road useless_left_right_distance_printing test_random_vehicle_parameter test_random_traffic test_fixed_traffic test_random_lane_num test_seeding test_map_random_seeding test_random_lane_width test_traffic_respawn test_reward_cost_done test_save_episode test_save_episode _test_idm_policy_briefly _create_vehicle _test_idm_policy_is_moving vis_big TestBlock ArgoverseEnv TestEnv _t get_image vis_top_down_render_with_panda_render vis_installation capture_image TestMemoryLeakEnv Config merge_config_with_unknown_keys filter_none merge_config config_to_dict _check_keys _recursive_check_keys _is_identical pgdrive_heading pgdrive_position panda_heading panda_position _get_fake_cutils import_cutils draw_top_down_map generate_maps point_in_rotated_rectangle get_vertical_vector get_boxes_bounding_box distance_greater dot3 clip get_points_bounding_box Vector rotated_rectangles_intersect point_distance safe_clip_for_small_array do_every norm safe_clip time_me point_in_rectangle dot wrap_to_pi has_corner_inside not_zero hash_seed random_string _int_list_from_bigint _bigint_from_bytes create_seed get_np_random check_lane_on_road is_following_lane_index get_lanes_on_road circle_region_detection ray_localization get_road_bounding_box get_straight_contour rect_region_detection is_same_lane_index get_curve_contour generate_invisible_static_wall get_all_lanes 
block_socket_merge Parameter ParameterSpace VehicleParameterSpace Box BlockParameterSpace Discrete Space Dict merge_dicts get_object_from_node is_mac concat_step_infos setup_logger auto_termination is_win import_pygame _deep_update deprecation_warning recursive_equal allOn allOff bit bit Sidewalk CONTINUOUS ContinuousLaneLine BrokenLaneLine ParameterSpace ParameterSpace BOTTLENECK_PARAMETER get_vertical_vector asarray arctan length StraightLane pi position direction_lateral CircularLane end_node deepcopy isinstance SIDEWALK_WIDTH length insert update_properties SIDEWALK_LINE_DIST start_node reverse position append width_at add_lane range radius deepcopy length end update_properties position center int isinstance StraightLane get_lanes_on_road start_phase radius forbidden length speed_limit CircularLane end_phase line_types priority position width_at CreateRoadFrom len center isinstance StraightLane get_lanes_on_road start_phase radius forbidden length speed_limit CircularLane end_phase line_types priority position width_at CreateRoadFrom len pop end_node reset_start_end arctan length pi sin position create_bend_straight ParameterSpace CURVE ParameterSpace ParameterSpace FORK_PARAMETER ParameterSpace INTERSECTION PARKING_LOT_PARAMETER ParameterSpace deg2rad ParameterSpace RAMP_PARAMETER ROUNDABOUT ParameterSpace ParameterSpace STRAIGHT ParameterSpace BOTTLENECK_PARAMETER T_INTERSECTION ParameterSpace LENGTH array pi AGMap update BLOCK_NUM BLOCK_SEQUENCE isinstance BLOCK_NUM BLOCK_SEQUENCE SINGLE_BLOCK Traffic_object ParameterSpace BASE_VEHICLE Vehicle ParameterSpace DEFAULT_VEHICLE ParameterSpace XL_VEHICLE ParameterSpace L_VEHICLE M_VEHICLE ParameterSpace ParameterSpace S_VEHICLE DepthCam RgbCam format warning initialized asset_path init_loader cls close get_engine seed get_engine warning PARA_VIS get_object_from_node hasPythonTag Vehicle getNode1 getNode0 COST_ONCE getName range loadPrcFileData loadPrcFileData format loadPrcFileData _add_shader_defines file_path 
step show close imshow reset current_map draw_top_down_map MultiAgentBottleneckEnv update list format print next_agent_count close reset sample step MultiAgentBottleneckEnv range values list format print next_agent_count close render reset step MultiAgentBottleneckEnv range values object_to_agent list format name print dist_to_left_side next_agent_count close render reset dist_to_right_side step MultiAgentBottleneckEnv range values time format all print reset sample step MultiAgentBottleneckEnv range values items list format vehicles print keys reset any sample step MultiAgentBottleneckEnv range values len MultiAgentRoundaboutEnv MultiAgentRoundaboutEnv MultiAgentRoundaboutEnv MultiAgentRoundaboutEnv MultiAgentRoundaboutEnv MultiAgentRoundaboutEnv MultiAgentIntersectionEnv MultiAgentIntersectionEnv MultiAgentIntersectionEnv MultiAgentIntersectionEnv MultiAgentIntersectionEnv MultiAgentIntersectionEnv show pixels_red close imshow MultiAgentIntersectionEnv current_map reset draw_top_down_trajectory resize save draw_top_down_map MultiAgentParkingLotEnv MultiAgentParkingLotEnv MultiAgentParkingLotEnv items _target_checkpoints_index current_road current_track_vehicle MultiAgentParkingLotEnv MultiAgentParkingLotEnv MultiAgentParkingLotEnv MultiAgentTollgateEnv MultiAgentTollgateEnv MultiAgentTollgateEnv lane_index block_ID MultiAgentTollgateEnv overspeed MultiAgentTollgateEnv MultiAgentTollgateEnv update list MultiAgentPGDrive print close setup_logger render reset step range values MultiAgentPGDrive setup_logger _runtime deepcopy toggle format close env_class render reset set_follow_lane save sample step update deepcopy _runtime toggle format close env_class reset set_follow_lane save sample step load tanh normal exp reshape matmul split load reshape tanh matmul pop print init update restore dict expanduser PPOTrainer workers split_by_episode time format print ParallelRollouts extend dict next MAX_WIDTH MAX_LENGTH simple_draw get_bounding_box Surface list pixels_red 
display __init__ keys WorldSurface resize move_display_window_to max pop items list sort circle add set dict enumerate values pos2pix pi PGDriveEnv close reset step range PGDriveEnv close reset step range max time format asarray print speed reset array position step heading_theta range format print close capture_frame shape ImageEncoder EngineCore initialize_asset_loader clear_world print file_path asset_path default_config Config global_config PGDriveEnv config update close reset recursive_equal cos sin arange pi _line_intersect range append get_mask pi deg2rad update_mask stack print _test_mask DetectorMask choice get_surrounding_vehicles get_vehicle_num update_mask position max WIDTH name perceive DetectorMask sum range PGDriveEnv format astype stack assert_almost_equal heading_theta deepcopy vehicles vehicle print get_mask reset num_lasers LENGTH step array PGDriveEnv deepcopy vehicle ones heading_theta perceive dynamic_world lidar reset _get_fake_cutils assert_almost_equal position fake_cutils_perceive _old_perceive step array range normal cutils_norm cutils_clip cutils_panda_position range _test_cutils import_cutils _test_cutils import_cutils normal array range PGDriveEnv dict _act reset sample update format initialize_engine CityMap dict save draw_top_down_map default_config _t keys _check_space set _check_shape keys set keys _check_space set shape list items _check_shape _check_spaces_before_reset _act reset _check_spaces_after_reset range items list format print _check_spaces_before_reset set keys _act difference reset any _check_spaces_after_reset MultiAgentBottleneckEnv range values items list format print end _check_spaces_before_reset set_position _act reset local_coordinates position _check_spaces_after_reset assert_almost_equal MultiAgentBottleneckEnv range values seed vehicles format print _check_spaces_before_reset reset _no_close_spawn _check_spaces_after_reset step MultiAgentBottleneckEnv range items list set_position _check_spaces_before_reset 
end _act reset position _check_spaces_after_reset set_static MultiAgentBottleneckEnv range format TestEnv print _respawn_count _check_spaces_before_reset reset any _check_spaces_after_reset step range values print _check_spaces_before_reset close dict reset _check_spaces_after_reset MultiAgentBottleneckEnv items time list format print _check_spaces_before_reset union set keys _act reset _check_spaces_after_reset MultiAgentBottleneckEnv range clear items list print _check_spaces_before_reset set keys _act add reset _check_spaces_after_reset set_static MultiAgentBottleneckEnv range list _check_spaces_before_reset check_pos reset finish _check_spaces_after_reset step MultiAgentBottleneckEnv range values list _check_spaces_before_reset check_pos reset _check_spaces_after_reset step MultiAgentBottleneckEnv range values items list reset step MultiAgentBottleneckEnv range MultiAgentRoundaboutEnv deepcopy close_and_reset_num_agents close reset _check_spaces_before_reset _act reset _check_spaces_after_reset range items list format print _check_spaces_before_reset set _act difference MultiAgentIntersectionEnv reset any _check_spaces_after_reset keys range values items list format print end _check_spaces_before_reset set_position _act MultiAgentIntersectionEnv reset local_coordinates position _check_spaces_after_reset assert_almost_equal range values seed vehicles format print _check_spaces_before_reset MultiAgentIntersectionEnv reset _no_close_spawn _check_spaces_after_reset step range items list set_position _check_spaces_before_reset end _act MultiAgentIntersectionEnv reset position _check_spaces_after_reset set_static range format TestEnv print _respawn_count _check_spaces_before_reset reset any _check_spaces_after_reset step range values print _check_spaces_before_reset close dict MultiAgentIntersectionEnv reset _check_spaces_after_reset items time list format print _check_spaces_before_reset set keys _act MultiAgentIntersectionEnv reset _check_spaces_after_reset union 
range clear items list print _check_spaces_before_reset set _act add MultiAgentIntersectionEnv reset _check_spaces_after_reset set_static keys range list _check_spaces_before_reset check_pos MultiAgentIntersectionEnv reset finish _check_spaces_after_reset step range values MultiAgentIntersectionEnv MultiAgentIntersectionEnv _check_spaces_before_reset _act reset _check_spaces_after_reset range items list format print _check_spaces_before_reset _check_spaces_after_reset set _act difference reset any MultiAgentParkingLotEnv keys range values items list format print end _check_spaces_before_reset set_position _check_spaces_after_reset _act reset local_coordinates position MultiAgentParkingLotEnv assert_almost_equal range values seed vehicles format print _check_spaces_before_reset _check_spaces_after_reset reset _no_close_spawn MultiAgentParkingLotEnv step range items list set_position _check_spaces_before_reset end _check_spaces_after_reset _act reset position MultiAgentParkingLotEnv set_static range print _check_spaces_before_reset close MultiAgentParkingLotEnv dict reset _check_spaces_after_reset items time list format print _check_spaces_before_reset _check_spaces_after_reset set keys _act reset MultiAgentParkingLotEnv union range clear items list print _check_spaces_before_reset _check_spaces_after_reset set _act add reset MultiAgentParkingLotEnv set_static keys range list _check_spaces_before_reset _check_spaces_after_reset check_pos reset finish MultiAgentParkingLotEnv step range values MultiAgentParkingLotEnv MultiAgentParkingLotEnv _check_spaces_before_reset _act reset _check_spaces_after_reset range MultiAgentRoundaboutEnv list format items print _check_spaces_before_reset set _act difference reset any _check_spaces_after_reset keys range values MultiAgentRoundaboutEnv list items format print end _check_spaces_before_reset set_position _act reset local_coordinates position _check_spaces_after_reset assert_almost_equal range values seed MultiAgentRoundaboutEnv 
vehicles format print _check_spaces_before_reset reset _no_close_spawn _check_spaces_after_reset step range MultiAgentRoundaboutEnv list items _check_spaces_before_reset _act reset _check_spaces_after_reset range MultiAgentRoundaboutEnv list items set_position _check_spaces_before_reset end _act reset position _check_spaces_after_reset set_static range format TestEnv print _respawn_count _check_spaces_before_reset reset any _check_spaces_after_reset step range values MultiAgentRoundaboutEnv print _check_spaces_before_reset close dict reset _check_spaces_after_reset MultiAgentRoundaboutEnv time list items format print _check_spaces_before_reset set keys _act reset _check_spaces_after_reset union range clear MultiAgentRoundaboutEnv list items print _check_spaces_before_reset set _act add reset _check_spaces_after_reset set_static keys range MultiAgentRoundaboutEnv list _check_spaces_before_reset check_pos reset finish _check_spaces_after_reset step range values MultiAgentRoundaboutEnv MultiAgentRoundaboutEnv _check_spaces_before_reset _act reset _check_spaces_after_reset range items list format print _check_spaces_before_reset set _act difference reset any _check_spaces_after_reset MultiAgentTollgateEnv keys range values items list format print end _check_spaces_before_reset set_position _act reset local_coordinates position _check_spaces_after_reset MultiAgentTollgateEnv assert_almost_equal range values seed vehicles format print _check_spaces_before_reset reset _no_close_spawn _check_spaces_after_reset MultiAgentTollgateEnv step range items list set_position _check_spaces_before_reset _act reset position _check_spaces_after_reset MultiAgentTollgateEnv range items list set_position _check_spaces_before_reset end set_static _act reset _check_spaces_after_reset MultiAgentTollgateEnv range format TestEnv print _respawn_count _check_spaces_before_reset reset any _check_spaces_after_reset step range values print _check_spaces_before_reset close dict reset 
_check_spaces_after_reset MultiAgentTollgateEnv items time list format print _check_spaces_before_reset set keys _act reset _check_spaces_after_reset MultiAgentTollgateEnv union range clear items list print _check_spaces_before_reset set set_static _act add reset _check_spaces_after_reset MultiAgentTollgateEnv keys range list _check_spaces_before_reset check_pos reset finish _check_spaces_after_reset MultiAgentTollgateEnv step range values MultiAgentTollgateEnv MultiAgentTollgateEnv step sample reset range _a seed MultiAgentPGDrive reset _step sample step range SafePGDriveEnv print close reset step range reset range step ActionRepeat reset sample step range dict _test_action_repeat reset array range step _run ChangeFrictionEnv ChangeDensityEnv _run print PGDriveEnv crash_vehicle reset step range PGDriveEnv reset crash_sidewalk step range PGDriveEnv reset range step PGDriveEnv reset PGDriveEnv reset pop TestCull reset any step range values PGDriveEnv close reset step range DefaultVehicle PGDriveEnv isinstance print detected_objects setup_logger reset step range DefaultVehicle PGDriveEnv isinstance print detected_objects setup_logger reset step range heading_diff norm current_road speed lane velocity_direction position assert_almost_equal abs _set_incremental_action _assert_vehicle current_map position update_map_info get_state _set_action spawn_object add_navigation _nan_speed after_step update PGDriveEnv set_force_calculate_lane_index projection _get_heading_deg set_position engine set_state assert_almost_equal heading_theta destroy reset before_step reset step SafePGDriveEnv reset range step seed PGDriveEnv time sum print close reset append step expert len dict _evaluate dict _evaluate PGDriveEnv print lazy_init close setup_logger dump_all_maps any reset save_map recursive_equal read_all_maps range PGDriveEnv vehicles get_closest_lane_index index reset lane_index position step range PGDriveEnv format close setup_logger dict imshow reset current_map savefig 
draw_top_down_map range makedirs PGDriveEnv print generate_maps close copy reset save_map recursive_equal PGDriveEnv reset range seed MultiAgentRoundaboutEnv vehicles list items format print reset step max range len MultiAgentRoundaboutEnv format all print reset append step range values pop MultiAgentRoundaboutEnv get format list items clear print set add reset step keys range clear PGDriveEnv accept PIDController print speed step close lateral faster go_right reset render Target go_left slower range get_result Config update Config update Config update Config update TestCollisionEnv isinstance detected_objects spawn_object render reset step range PGDriveEnv assert_equal _act _add_noise_to_cloud_points reset sample assert_almost_equal array cloud_points PGDriveEnv format WIDTH print min dict sqrt reset LENGTH step range PGDriveEnv format vehicle print get_current_lane_num range dict get_current_lane_width float step clip seed PGDriveEnv reset PGDriveEnv reset save_map append recursive_equal enumerate PGDriveEnv reset range PGDriveEnv reset range PGDriveEnv reset width PGDriveEnv reset close get_current_lane_num get_config PGDriveEnv reset PGDriveEnv vehicles discard list traffic_vehicles setup_logger set reset step range PGDriveEnv format print copy dict reset step range MultiAgentRoundaboutEnv dump_episode setup_logger dict render reset step range PGDriveEnv update DefaultVehicle Config initialize_engine PGDriveEnv IDMPolicy traffic_vehicles reset before_step sample register_new_policy step after_step update PGDriveEnv traffic_vehicles reset sample step array range initialize_asset_loader TestBlock set_global_random_seed render BIG setPos RoadNetwork world run show yticks set_global_random_seed imshow title xticks screenshot save_image time format print close setup_logger render reset sample step range TopDownSingleFramePGDriveEnv show PGDriveEnv remove print PNMImage write close getScreenshot dict reset step range open PGDriveEnv close reset loadPrcFileData 
capture_image step range merge_dicts get_dict isinstance get_dict isinstance set items list isinstance zip _check_keys items list isinstance tolist dict get_dict list keys append items list pop print dump_all_maps close env_class nan_to_num float astype list clip isnan range len norm dot array transpose array max min array max min inf seed RandomState hash_seed create_seed _int_list_from_bigint digest create_seed encode urandom _bigint_from_bytes isinstance int format len unpack enumerate append divmod uuid4 str format int list items graph length get_road_bounding_box local_coordinates position width_at range enumerate append length width position center start_phase pi width append range array direction radius graph items list append get_object_from_node norm rayTestAll sorted panda_position argmin cos getNode hasHits getHits local_coordinates append heading_at sin panda_position sweep_test_closest Vec3 panda_heading makePosHpr BulletBoxShape BulletCylinderShape makePos sweep_test_closest panda_position setStatic setIntoCollideMask BaseRigidBodyNode setKinematic Vec3 setActive addShape BulletBoxShape dict basicConfig isinstance get_dict append range len dict merge_dicts deepcopy _deep_update items list format warning getLogger base_object_name isinstance | <img align=right width=250px src="pgdrive/assets/PGDrive.png" /> # PGDrive: an open-ended driving simulator with infinite scenes [](http://github.com/decisionforce/pgdrive/actions) [](https://codecov.io/gh/decisionforce/pgdrive) [](https://pgdrive.readthedocs.io) [](https://github.com/decisionforce/pgdrive/blob/main/LICENSE.txt) [](https://github.com/decisionforce/pgdrive/stargazers) **This project is deprecated and merged into [MetaDrive](https://github.com/decisionforce/metadrive). 
Please follow the MetaDrive repo for the latest development and maintenance.** **[ 📺 [Website](https://decisionforce.github.io/pgdrive/) | 🏗 [Github Repo](https://github.com/decisionforce/pgdrive) | 📜 [Documentation](https://pgdrive.readthedocs.io/) | 🎓 [Paper](https://arxiv.org/pdf/2012.13681) ]** Welcome to PGDrive! PGDrive is a driving simulator with many key features, including: | 1,871 |
decisionforce/pgdrive-generalization-paper | ['autonomous driving'] | ['Improving the Generalization of End-to-End Driving through Procedural Generation'] | eval_friction.py draw_single_block_results.py train_ppo.py draw_ppo_results.py eval_density.py draw_sac_results.py train_ppo_change_density.py train_ppo_change_friction_10.py utils.py train_ppo_single_block_agent.py train_sac.py get_termination parse smooth filter_nan _parse _flatten_dict get_trainer evaluate get_result get_trainer evaluate get_result DrivingCallbacks initialize_ray train get_train_parser update deepcopy list items isinstance any append DataFrame sorted print concat _parse append enumerate int list format reset_index isinstance Number print min logical_and copy keys linspace append DataFrame max items list concat smooth copy append sort_values update restore dict expanduser PPOTrainer workers split_by_episode time format print ParallelRollouts extend dict next join format replace evaluate cleanup print sort DataFrame dict get_trainer abspath append listdir pop print init add_argument ArgumentParser get update deepcopy format print initialize_ray CLIReporter copy add_metric_column run | **Status:** Archive - code is provided as-is, no updates expected. # Improving the Generalization of End-to-End Driving through Procedural Generation This is the official material of the paper: "Improving the Generalization of End-to-End Driving through Procedural Generation". Please visit the following links to learn more on our PGDrive simulator: **[ 📺 [Website](https://decisionforce.github.io/pgdrive/) | 🏗 [PGDrive Repo](https://github.com/decisionforce/pgdrive) | 📜 [Documentation](https://pgdrive.readthedocs.io/) | 🎓 [Paper](https://arxiv.org/pdf/2012.13681) ]** ## Setup the environment ```bash # Clone this repo to local git clone https://github.com/decisionforce/pgdrive-generalization-paper.git cd pgdrive-generalization-paper | 1,872 |
dedekinds/The-Color-Transfer-of-Animes-Characters-Images | ['style transfer'] | ['Deep Photo Style Transfer'] | pre_color_model.py get_pre_coloring.py photo_style.py deep_photostyle.py closed_form_matting.py base_model.py smooth_local_affine.py vgg19/vgg.py rot_flop png2jpg_background_white getLaplacian getlaplacian1 main rot_flop png2jpg_background_white affine_loss stylize load_seg rgb2bgr save_result print_loss bgr2rgb gram_matrix content_loss total_variation_loss style_loss rot_flop smooth_local_affine Vgg19 chdir new putpixel array range open system broadcast_to list csr_matrix transpose identity matmul shape sum diags range grey_erosion int T reshape inv repeat zeros ravel array shape transpose tocoo max_iter fromarray stylize uint8 join serial init_image_path transpose convert ascontiguousarray shape save output_image array clip constant _extract_mask resize append expand_dims array range len reshape transpose matmul as_list constant format squared_difference print multiply resize_bilinear greater avg_pool gram_matrix pad reduce_mean cond zip append float range len get_shape reduce_sum reshape transpose unstack fromarray uint8 clip save max_iter join serial format print save_result enumerate load_seg getLaplacian Session run to_float max_iter ScipyOptimizerInterface sparse_tensor_dense_matmul conv4_2 squeeze style_weight transpose save_result apply_gradients unstack content_loss expand_dims range style_loss serial affine_loss format partial lbfgs rgb2bgr astype stack compute_gradients ConfigProto float tv_weight enumerate join time constant deepcopy minimize print Variable reshape convert float32 AdamOptimizer style_seg_path affine_weight content_seg_path total_variation_loss global_variables_initializer array content_weight _best_local_affine_kernel InOut float32 shape int32 _reconstruction_best_kernel zeros _bilateral_smooth_kernel SourceModule get_function | # The Color Transfer of Animes Characters' Images:Pokémon Fusion Example The final project of 
the Advanced Machine Learning course at Tsinghua University. Two contributors come from Tsinghua University [@dedekinds](https://github.com/dedekinds) and Northwestern University, U.S. [@wuyujack](https://wuyujack.github.io/). This project aims to improve image quality after color transfer on low-quality pictures. You can refer to the `Data` section to learn more about our test images. ## License *MIT license* ## Setup ### Dependencies * [Tensorflow](https://www.tensorflow.org/) * [Numpy](https://www.numpy.org/) * [Pillow](https://pypi.python.org/pypi/Pillow/) | 1,873 |
deepBear6/StyleTransfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | StyleTransfer.py get_model load_images gram_matrix get_activation save_image vgg19 isinstance AvgPool2d Sequential Normalization MaxPool2d add_module Conv2d requires_grad_ parameters eval ReLU features cuda requires_grad_ Compose cuda open size view children layer enumerate imwrite ToPILImage tfm squeeze cpu | # Implementing the Gatys et al. paper on Style transfer (https://arxiv.org/abs/1508.06576) Here I'm using the Adam optimizer instead of the LBFGS optimizer. # Results    | 1,874 |
deepakbaby/isegan | ['speech enhancement'] | ['iSEGAN: Improved Speech Enhancement Generative Adversarial Networks'] | preprocessing.py run_isegan.py models.py keras_contrib_backend.py prepare_data.py normalizations.py spectralnorms.py file_ops.py data_loader.py data_ops.py pre_emph de_emph read_and_decode pre_emph data_preprocess de_emph reconstruct_wav get_modeldirname write_log _preprocess_padding extract_image_patches _preprocess_conv2d_input conv2d _postprocess_conv2d_output moments clip depth_to_space generator discriminatorSN discriminator GroupNormalization BatchRenormalization InstanceNormalization read_and_slice1d prepare_sliced_data1d slice_1dsignal DenseSN ConvSN1D _ConvSN ConvSN2D ConvSN3D ConvSN2DTranspose EmbeddingSN reshape concat zeros range read TFRecordReader decode_raw float32 pre_emph set_shape int32 cast parse_single_example int ones ceil zeros float range print expand_dims concatenate pre_emph astype Summary zip add add_summary flush transpose cast transpose cast transpose cast _preprocess_padding reshape transpose permute_dimensions int_shape _preprocess_conv2d_input lower image_data_format _postprocess_conv2d_output base_dtype _to_tensor inf maximum as_list int T concatenate set_weights len Model set_shape summary append zeros expand_dims Input enumerate T concatenate print relu set_weights Model summary zeros expand_dims Input enumerate T concatenate print relu set_weights Model summary zeros expand_dims Input enumerate int concatenate reshape array range read slice_1dsignal str join print read_and_slice1d vstack append array enumerate len | # Improved SEGAN Tricks to improve [SEGAN](https://github.com/santi-pdp/segan) performance. Everything is re-implemented into Keras with Tensorflow backend. Supporting document with evaluation results and other details can be found [here](https://arxiv.org/pdf/2002.08796.pdf).
**Deepak Baby, _iSEGAN: Improved Speech Enhancement Generative Adversarial Networks_, arXiv preprint, 2020.** ---- ### Pre-requisites 1. Install [tensorflow](https://www.tensorflow.org/) (tested on Tensorflow v1.13.2) and [keras](https://keras.io/) (tested on Keras v2.3.1) 1. Install [tqdm](https://pypi.org/project/tqdm/) for profiling the training progress 1. The experiments are conducted on a dataset from Valentini et al., which can be downloaded from [here](https://datashare.is.ed.ac.uk/handle/10283/2791). The following script can be used to download the dataset. *Requires [sox](http://sox.sourceforge.net/) for converting to 16kHz*. ```bash | 1,875 |
deepdeepdot/FastPhotoStyle | ['image stylization'] | ['A Closed-form Solution to Photorealistic Image Stylization'] | demo.py photo_smooth.py models.py smooth_filter.py photo_wct.py process_stylization_examples.py VGGEncoder1 VGGEncoder2 VGGEncoder4 VGGDecoder2 VGGDecoder4 VGGEncoder3 VGGDecoder3 VGGDecoder1 Propagator PhotoWCT smooth_local_affine smooth_filter load _best_local_affine_kernel bytes Module namedtuple Stream numpy Program _reconstruction_best_kernel encode _bilateral_smooth_kernel cuda compile get_function fromarray uint8 transpose convert ascontiguousarray shape resize smooth_local_affine array clip | ## FastPhotoStyle ### License Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). ### About This code repository contains an implementation of our fast photorealistic style transfer algorithm. Given a content photo and a style photo, the code can transfer the style of the style photo to the content photo. The details of the algorithm behind the code is documented in our arxiv paper. Please cite the paper if this code repository is used in your publications. [Yijun Li](https://sites.google.com/site/yijunlimaverick/), [Ming-Yu Liu](http://mingyuliu.net/), [Xueting Li](https://sunshineatnoon.github.io/), [Ming-Hsuan Yang](http://faculty.ucmerced.edu/mhyang/), [Jan Kautz](http://jankautz.com/) "[A Closed-form Solution to Photorealistic Image Stylization](https://arxiv.org/abs/1802.06474)" arXiv preprint arXiv:1802.06474  ### Code usage Please check out the [user manual page](USAGE.md). | 1,876 |
deepinx/ShuffleNet_v1_and_v2 | ['face verification'] | ['MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices'] | symbol.py test_mnist.py plot_netgraph.py symbol_utils.py fmobilefacenet.py fshufflenetv2.py DResidual Residual ConvOnly get_mobilefacenet Act Conv Linear get_shufflenet_v2 concat_shuffle_split channel_shuffle basic_unit make_stage Activation basic_unit_with_downsampling get_shufflenet shuffleUnit channel_shuffle combine make_stage get_fc1 get_head Act Conv residual_unit_v3 Linear LeakyReLU Convolution Act BatchNorm Convolution BatchNorm Convolution Conv Linear DResidual range get_fc1 DResidual Residual SoftmaxOutput Variable Conv concat channel_shuffle split reshape swapaxes LeakyReLU Convolution Activation BatchNorm Convolution Activation BatchNorm concat concat_shuffle_split channel_shuffle basic_unit range basic_unit_with_downsampling get_fc1 SoftmaxOutput Variable Convolution make_stage Pooling Convolution channel_shuffle combine BatchNorm Activation shuffleUnit Pooling var FullyConnected SoftmaxOutput Convolution flatten make_stage Pooling broadcast_div FullyConnected print Convolution Act Linear mean sqrt tile BatchNorm sum Flatten Dropout get _set_attr Act Conv BatchNorm Pooling Act min Conv BatchNorm residual_unit_v3 | # The MXNet Implementation of ShuffleNet v1 and v2
This repository includes code for ShuffleNet v1 and v2. In addition, MobileFaceNet, an efficient mobile CNN for face verification introduced in [arxiv](https://arxiv.org/abs/1804.07573), is also included in this repository.
## Environment
This repository has been tested under the following environment:
- Python 2.7
- Ubuntu 18.04
| 1,877 |
deepinx/shufflenet-v1-and-v2 | ['face verification'] | ['MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices'] | symbol.py test_mnist.py plot_netgraph.py symbol_utils.py fmobilefacenet.py fshufflenetv2.py DResidual Residual ConvOnly get_mobilefacenet Act Conv Linear get_shufflenet_v2 concat_shuffle_split channel_shuffle basic_unit make_stage Activation basic_unit_with_downsampling get_shufflenet shuffleUnit channel_shuffle combine make_stage get_fc1 get_head Act Conv residual_unit_v3 Linear LeakyReLU Convolution Act BatchNorm Convolution BatchNorm Convolution Conv Linear DResidual range get_fc1 DResidual Residual SoftmaxOutput Variable Conv concat channel_shuffle split reshape swapaxes LeakyReLU Convolution Activation BatchNorm Convolution Activation BatchNorm concat concat_shuffle_split channel_shuffle basic_unit range basic_unit_with_downsampling get_fc1 SoftmaxOutput Variable Convolution make_stage Pooling Convolution channel_shuffle combine BatchNorm Activation shuffleUnit Pooling var FullyConnected SoftmaxOutput Convolution flatten make_stage Pooling broadcast_div FullyConnected print Convolution Act Linear mean sqrt tile BatchNorm sum Flatten Dropout get _set_attr Act Conv BatchNorm Pooling Act min Conv BatchNorm residual_unit_v3 | # The MXNet Implementation of ShuffleNet v1 and v2
This repository includes code for ShuffleNet v1 and v2. In addition, MobileFaceNet, an efficient mobile CNN for face verification introduced in [arxiv](https://arxiv.org/abs/1804.07573), is also included in this repository.
## Environment
This repository has been tested under the following environment:
- Python 2.7
- Ubuntu 18.04
| 1,878 |
deepmind/interval-bound-propagation | ['text classification', 'data augmentation'] | ['Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation'] | interval_bound_propagation/src/loss.py interval_bound_propagation/src/crown.py interval_bound_propagation/tests/simplex_bounds_test.py interval_bound_propagation/tests/bounds_test.py interval_bound_propagation/tests/attacks_test.py interval_bound_propagation/__init__.py interval_bound_propagation/tests/model_test.py examples/language/exhaustive_verification.py examples/eval.py interval_bound_propagation/src/__init__.py interval_bound_propagation/tests/loss_test.py examples/language/models.py examples/language/utils.py examples/language/interactive_example.py interval_bound_propagation/src/model.py interval_bound_propagation/src/specification.py interval_bound_propagation/src/layer_utils.py interval_bound_propagation/tests/fastlin_test.py interval_bound_propagation/src/verifiable_wrapper.py interval_bound_propagation/tests/relative_bounds_test.py examples/train.py interval_bound_propagation/src/utils.py interval_bound_propagation/src/layers.py interval_bound_propagation/tests/specification_test.py interval_bound_propagation/src/relative_bounds.py interval_bound_propagation/src/fastlin.py examples/language/robust_model.py interval_bound_propagation/src/bounds.py interval_bound_propagation/src/simplex_bounds.py interval_bound_propagation/tests/layers_test.py examples/language/config.py setup.py interval_bound_propagation/tests/crown_test.py interval_bound_propagation/src/attacks.py examples/language/robust_train.py ibp_test_suite main layers show_metrics main layers show_metrics get_config verify_exhaustively expand_by_one_perturbation find_up_to_depth_k_perturbations load_synonyms verify_dataset load_dataset example main remove_duplicates InteractiveSentimentPredictor _max_pool_1d SentenceRepresenterConv RobustModel GeneratedDataset parse verifiable_objective targeted_objective 
filter_correct_class _pad_fixed construct_synonyms train write_tf_summary linear_schedule load_synonyms main config_train_summary analysis EmbedAndPad get_padded_indexes get_merged_vocabulary_file get_padded_embeddings get_accuracy get_num_correct_predictions UnrolledSPSA UntargetedTop5PGDAttack RestartedAttack pgd_attack MemoryEfficientMultiTargetedPGDAttack _topk_greater wrap_optimizer MultiTargetedPGDAttack _project_perturbation UntargetedAdaptivePGDAttack UnrolledOptimizer UnrolledSPSAFGSMDescent UnrolledSPSAAdam UntargetedPGDAttack PGDAttack UnrolledGradientDescent _maximize_margin _spsa_gradients UnrolledFGSMDescent Attack _any_greater _is_spsa_optimizer _maximize_topk_hinge_margin UnrolledAdam UnrolledSPSAGradientDescent IntervalBounds AbstractBounds create_initial_backward_bounds VerifiableModelWrapper create_classification_losses Losses BackwardBounds SymbolicBounds RelativeSymbolicBounds BatchNorm ImageNorm _materialise_conv2d conv_output_shape decode_batchnorm materialise_conv _materialise_conv1d combine_with_batchnorm Losses _create_linear_initializer VerifiableModelWrapper StandardModelWrapper DNN _create_conv2d_initializer _activation_bounds _maxpool_bounds RelativeIntervalBounds _simplex_bounds SimplexBounds RandomClassificationSpecification Specification LinearSpecification ClassificationSpecification LeastLikelyClassificationSpecification TargetedClassificationSpecification _maximize_cross_entropy get_attack_builder _get_projection _get_random_class build_loss_schedule create_specification smooth_schedule _all_smaller _minimize_margin build_dataset add_image_normalization create_attack _minimize_cross_entropy parse_learning_rate linear_schedule _get_least_likely_class _maximize_margin randomize create_classification_losses _change_parameters ConstWrapper LinearConv2dWrapper LinearConv1dWrapper BatchFlattenWrapper SoftmaxWrapper VerifiableWrapper LinearFCWrapper LinearConvWrapper IncreasingMonotonicWrapper ModelInputWrapper BatchNormWrapper 
ImageNormWrapper BatchReshapeWrapper PiecewiseMonotonicWrapper AttacksTest MockWithoutIsTraining MockWithIsTraining IntervalBoundsTest CROWNBoundsTest _generate_identity_spec SymbolicBoundsTest LayersTest _get_inputs FixedNN LossTest _build_model ModelTest _materialised_conv_bounds RelativeIntervalBoundsTest _materialised_conv_simplex_bounds SimplexBoundsTest _build_classification_specification SpecificationTest _build_spec_input discover TestLoader format print attack_accuracy crown_ibp_verified_accuracy verified_accuracy nominal_accuracy UntargetedPGDAttack layers model latest_checkpoint batch_size get_test_metrics ConfigProto model_dir VerifiableModelWrapper Saver load_data DNN info dataset add_image_normalization get_variables image loss_helper output_dir get_collection _replace getattr append build_dataset epsilon_train get_or_create_global_step parse_learning_rate FileWriter upper label merge join learning_rate model_wrapper AdamOptimizer UPDATE_OPS create_classification_losses _fields scalar update defaultdict load str format get_next info append zip copy enumerate remove_duplicates extend expand_by_one_perturbation set join debug_mode tolist any find_up_to_depth_k_perturbations batch_predict_sentiment info append len join verify_exhaustively debug_mode InteractiveSentimentPredictor skip_batches tqdm mean truncated_len info append enumerate len sorted extend Counter expand_by_one_perturbation tqdm pprint find_up_to_depth_k_perturbations len str checkpoint_path get_config character_level delta verify_dataset load_synonyms load_dataset mode as_list concat set_shape maximum uint8 decode_raw dense_to_sparse set_shape PY3 vectorize SparseTensor upper_offset output_module lower_offset maximum reduce_sum w targeted_objective b filter_correct_class expand_dims gather transpose ones_like not_equal where expand_dims range list cumsum keys load_synonyms max float float32 scalar merge add_summary Summary add graph_tensor_producer RobustModel construct_synonyms Saver 
MakeDirs variables config_train_summary config graph_tensor_producer RobustModel construct_synonyms Saver info variables reset_default_graph tensorboard_dir experiment_root train analysis value reshape concat sparse_to_dense indices lookup stack cast int32 tile set_shape gather expand_dims range values lookup set_shape sparse_to_dense values float64 reduce_sum int64 cast argmax equal join sorted name write close NamedTemporaryFile info union len namedtuple namedtuple while_loop clip_by_value init_state while_loop float32 flatten shape project_perturbation cast random_uniform len reduce_max values values input_bounds one_hot isinstance convert LinearSpecification ClassificationSpecification num_specifications zeros expand_dims LinearExpression _is_loss_active build_loss_schedule zeros conv1d convolution value reshape transpose eye convolution value reshape transpose conv1d eye rsqrt isinstance squeeze variance bias mean moving_variance _eps moving_mean BatchNorm scale gamma beta epsilon zeros decode_batchnorm reduce_max nl_fun reduce_max reduce_min from_tensor_slices Sample astype float32 shuffle int64 expand_dims cast int cast float32 linear_schedule startswith output_size _get_schedule create_specification smooth_schedule predictor_network get_collection reduce_sum _update_is_training ScalarLosses sum create_attack get IntervalBounds loss_builder linear_schedule info minimum constant REGULARIZATION_LOSSES losses maximum propagate_bounds dict int value hasattr RandomClassificationSpecification endswith _get_projection group ClassificationSpecification match getattr LeastLikelyClassificationSpecification _change_parameters _get_random_class _get_least_likely_class TargetedClassificationSpecification wrap_optimizer replace get_attack_builder attack_cls group RestartedAttack match eval getattr UnrolledAdam logits split convert_to_tensor int list minimum constant isinstance argmin len where eval append gather float max split uniform argmin shape concat reduce_max 
reshape LinearSpecification create_initial_backward_bounds eye list constant array range apply_linear materialise_conv conv_output_shape apply_batch_reshape apply_linear materialise_conv conv_output_shape apply_batch_reshape constant ones IntervalBounds identity array MockLinearModule constant concatenate zeros eye append gather array range | # Interval Bound Propagation for Training Verifiably Robust Models This repository contains a simple implementation of Interval Bound Propagation (IBP) using TensorFlow: [https://arxiv.org/abs/1810.12715](https://arxiv.org/abs/1810.12715). It also contains an implementation of CROWN-IBP: [https://arxiv.org/abs/1906.06316](https://arxiv.org/abs/1906.06316). It also contains a sentiment analysis example under [`examples/language`](https://github.com/deepmind/interval-bound-propagation/tree/master/examples/language) for [https://arxiv.org/abs/1909.01492](https://arxiv.org/abs/1909.01492). This is not an official Google product ## Installation | 1,879 |
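The interval-bound-propagation README above only names the method. As a rough illustration — not code from the repository; the function names and toy network are my own — here is a minimal NumPy sketch of how interval bounds propagate through one affine layer followed by a ReLU:

```python
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through x @ W + b.

    Positive weights map lower inputs to lower outputs and vice versa,
    so W is split into its positive and negative parts.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = lower @ W_pos + upper @ W_neg + b
    new_upper = upper @ W_pos + lower @ W_neg + b
    return new_lower, new_upper

def interval_relu(lower, upper):
    # ReLU is monotone, so bounds pass through elementwise.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy layer: bound the outputs for every input within +/- eps of x.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), np.zeros(2)
x, eps = np.array([0.5, -0.2, 0.1]), 0.1
l, u = interval_relu(*interval_affine(x - eps, x + eps, W, b))
```

For any input within `eps` of `x`, the true post-ReLU activations are guaranteed to lie inside `[l, u]`; IBP chains this kind of step through every layer to bound the logits.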
deepsense-ai/carla-real-traffic-scenarios | ['autonomous driving'] | ['openDD: A Large-Scale Roundabout Drone Dataset'] | examples/runnable_template.py carla_real_traffic_scenarios/assets/actor_manager.py carla_real_traffic_scenarios/assets/blueprints.py carla_real_traffic_scenarios/reward.py carla_real_traffic_scenarios/artificial_lane_change/scenario.py carla_real_traffic_scenarios/utils/units.py carla_real_traffic_scenarios/opendd/dataset.py carla_real_traffic_scenarios/roundabouts/__init__.py carla_real_traffic_scenarios/utils/topology.py carla_real_traffic_scenarios/utils/geometry.py carla_real_traffic_scenarios/early_stop.py carla_real_traffic_scenarios/trajectory.py carla_real_traffic_scenarios/roundabouts/Town03/nodes.py carla_real_traffic_scenarios/utils/carla.py carla_real_traffic_scenarios/ngsim/__init__.py carla_real_traffic_scenarios/__init__.py carla_real_traffic_scenarios/roundabouts/route.py carla_real_traffic_scenarios/utils/pandas.py carla_real_traffic_scenarios/utils/transforms.py carla_real_traffic_scenarios/carla_maps.py carla_real_traffic_scenarios/vehicles.py carla_real_traffic_scenarios/artificial_lane_change/controller.py carla_real_traffic_scenarios/scenario.py carla_real_traffic_scenarios/utils/collections.py carla_real_traffic_scenarios/opendd/scenario.py carla_real_traffic_scenarios/roundabouts/types.py carla_real_traffic_scenarios/ngsim/scenario.py carla_real_traffic_scenarios/opendd/recording.py carla_real_traffic_scenarios/assets/utils.py setup.py carla_real_traffic_scenarios/ngsim/ngsim_recording.py carla_real_traffic_scenarios/artificial_lane_change/__init__.py carla_real_traffic_scenarios/assets/markings.py carla_real_traffic_scenarios/ngsim/cords_mapping.py CarlaMap CarlaMaps EarlyStop EarlyStopMonitor RewardType ScenarioStepResult ChauffeurCommand Scenario LaneChangeProgressMonitor Trajectory LaneAlignmentMonitor _get_nearest_location BoundingBox VehicleModel TeleportCommandsController _is_behind_ego_or_inside_birdview 
ArtificialLaneChangeScenario _calc_offset ActorManager randomize_attributes deserialize_json_file Marking serialize_to_json_file clone_location clone_transform import_json clone_rotation export_json NGSimToCarlaMapper NGSimRecording Simulator LaneChangeInstant NGSimCar _wp2str NGSimLaneChangeScenario NGSimDataset NGSimDatasets I80Timeslots DatasetMode US101Timeslots NGSimTimeslot Place OpenDDDataset OpenDDVehicle extract_utm_trajectory_from_df _find_ego_vehicle_with_time_frame _trim_trajectory_utm_to_entry_end_exit _determine_split _resample_df OpenDDRecording Utm2CarlaMapper Chauffeur _quantify_progress OpenDDScenario build_roundabout_checkpoint_route RouteCheckpoint CircleArea RoundaboutNode RoundaboutScenario debug_draw find_best_matching_model RealTrafficVehicle setup_carla_settings RealTrafficVehiclesInCarla CollisionSensor smallest_by find_first_matching Comparable normalize_angle jaccard_rectangles points_on_ring normalize_angle_npy swap_columns_inplace same_waypoint _unroll_waypoint same_lane Topology get_lane_id get_lane_ids distance_between resample_points Transform Vector3 Vector2 convert_to_vector2 distance_between_on_plane positions_to_transforms prepare_ego_vehicle prepare_opendd_scenario parser_args prepare_ngsim_scenario CarlaMap auto auto distance_matrix int ceil int norm arccos as_numpy dot clip join has_attribute map randint set_attribute export_json info import_json info debug debug NGSimTimeslot frozenset NGSimTimeslot frozenset US101 NGSimDataset I80 auto values int groupby reset_index set_index TIMESTAMP total_seconds TimedeltaIndex first to_list list _trim_trajectory_utm_to_entry_end_exit set min argmin distance_matrix array values int hexdigest append next_node range RouteCheckpoint center points_on_ring zip Location draw_point list apply_settings get_settings warning get_world key_fn predicate min array cos linspace pi sin drop get extend Queue put convert_from ndarray isinstance int list splprep splev stack linspace zip sum array append 
windowed normalized parse_args add_argument ArgumentParser get list choice load_world level_path get choice upper load_world getattr level_path Transform Rotation spawn_actor Location set_attribute find |     CARLA real traffic scenarios ======================== <p align="center"> <img width="100%" height="auto" alt="readme-main" | 1,880 |
deepsuncode/LSTM-flare-prediction | ['time series'] | ['Predicting Solar Flares Using a Long Short-Term Memory Network'] | DEMONSTRATION/LSTM_M_sample_run/LSTMflare.py DEMONSTRATION/LSTM_C_sample_run/LSTMflare.py DEMONSTRATION/LSTM_M5_sample_run/LSTMflare.py Table5_dataset_and_source_code/LSTMpredict.py DEMONSTRATION - Clean Version/LSTM_C_sample_run/LSTMflare.py DEMONSTRATION - Clean Version/LSTM_M_sample_run/LSTMflare.py DEMONSTRATION - Clean Version/LSTM_M5_sample_run/LSTMflare.py data_transform lstm attention_3d_block load_data data_transform lstm attention_3d_block load_data data_transform lstm attention_3d_block load_data data_transform lstm attention_3d_block load_data data_transform lstm attention_3d_block load_data data_transform lstm attention_3d_block load_data data_transform lstm partition_10_folds attention_3d_block load_data int insert print tolist shape read_csv append array range values len transform fit to_categorical LabelEncoder dot int concatenate Model Input attention_3d_block seed shuffle append round range len | Predicting Solar Flares Using a Long Short-term Memory Network Hao Liu, Chang Liu, Jason T. L. Wang and Haimin Wang We present a long short-term memory (LSTM) network for predicting whether an active region (AR) would produce a ϒ-class flare within the next 24 hours. We consider three ϒ classes, namely ≥M5.0 class, ≥M class, and ≥C class, and build three LSTM models separately, each corresponding to a ϒ class. Each LSTM model is used to make predictions of its corresponding ϒ-class flares. The essence of our approach is to model data samples in an AR as time series and use LSTMs to capture temporal information of the data samples. Each data sample has 40 features including 25 magnetic parameters obtained from the Space-weather HMI Active Region Patches (SHARP) and related data products as well as 15 flare history parameters. 
We survey the flare events that occurred from 2010 May to 2018 May, using the GOES X-ray flare catalogs provided by the National Centers for Environmental Information (NCEI), and select flares with identified ARs in the NCEI flare catalogs. These flare events are used to build the labels (positive vs. negative) of the data samples. Experimental results show that (i) using only 14-22 most important features including both flare history and magnetic parameters can achieve better performance than using all the 40 features together; (ii) our LSTM network outperforms related machine learning methods in predicting the labels of the data samples. To our knowledge, this is the first time that LSTMs have been used for solar flare prediction. References: Predicting Solar Flares Using a Long Short-term Memory Network. Liu, H., Liu, C., Wang, J. T. L., Wang, H., ApJ., 877:121, 2019. https://iopscience.iop.org/article/10.3847/1538-4357/ab1b3c https://arxiv.org/abs/1905.07095 https://web.njit.edu/~wangj/LSTMpredict/ | 1,881 |
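The repository's function list above includes an `attention_3d_block` applied on top of the LSTM. As a framework-free illustration of the underlying idea only — the real block is a trained Keras layer, whereas the scoring vector here is random and purely a stand-in — a temporal-attention sketch in NumPy:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(h, w):
    """Weight the timesteps of a sequence of hidden states.

    h: (timesteps, features) hidden states from a recurrent encoder.
    w: (features,) scoring vector (random here, learned in practice).
    Returns the attention weights and the weighted summary vector.
    """
    scores = h @ w            # one scalar score per timestep
    alpha = softmax(scores)   # normalise across the time axis
    summary = alpha @ h       # convex combination of hidden states
    return alpha, summary

rng = np.random.default_rng(0)
h = rng.normal(size=(10, 40))   # 10 timesteps, 40 features per sample
alpha, summary = temporal_attention(h, rng.normal(size=40))
```

The 40-dimensional feature axis matches the paper's per-sample feature count; the 10-step window is an arbitrary choice for the sketch.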
deepsuncode/RNN-CME-prediction | ['time series'] | ['Predicting Coronal Mass Ejections Using SDO/HMI Vector Magnetic Data Products and Recurrent Neural Networks'] | CMEpredict/CMEpredict.py output_result lstm get_n_features_thresh attention_3d_block load_data gru int insert tolist read_csv append array range values len dot int concatenate Model Input attention_3d_block Model Input attention_3d_block read_csv values | Predicting Coronal Mass Ejections Using SDO/HMI Vector Magnetic Data Products and Recurrent Neural Networks Hao Liu, Chang Liu, Jason T. L. Wang and Haimin Wang We present two recurrent neural networks (RNNs), one based on gated recurrent units and the other based on long short-term memory, for predicting whether an active region (AR) that produces an M- or X-class flare will also produce a coronal mass ejection (CME). We model data samples in an AR as time series and use the RNNs to capture temporal information of the data samples. Each data sample has 18 physical parameters, or features, derived from photospheric vector magnetic field data taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). We survey M- and X-class flares that occurred from 2010 May to 2019 May using the Geostationary Operational Environmental Satellite's X-ray flare catalogs provided by the National Centers for Environmental Information (NCEI), and select those flares with identified ARs in the NCEI catalogs. In addition, we extract the associations of flares and CMEs from the Space Weather Database Of Notifications, Knowledge, Information (DONKI). We use the information gathered above to build the labels (positive versus negative) of the data samples at hand. Experimental results demonstrate the superiority of our RNNs over closely related machine learning methods in predicting the labels of the data samples. 
We also discuss an extension of our approach to predict a probabilistic estimate of how likely an M- or X-class flare will initiate a CME, with good performance results. To our knowledge, this is the first time that RNNs have been used for CME prediction. References: Predicting Coronal Mass Ejections Using SDO/HMI Vector Magnetic Data Products and Recurrent Neural Networks. Liu, H., Liu, C., Wang, J. T. L., Wang, H., ApJ., 890:12, 2020. https://iopscience.iop.org/article/10.3847/1538-4357/ab6850 https://arxiv.org/abs/2002.10953 https://web.njit.edu/~wangj/RNNcme/ | 1,882
deepsuncode/SolarUnet | ['semantic segmentation'] | ['Identifying and Tracking Solar Magnetic Flux Elements with Deep Learning'] | solarunet.py statistics_analysis.py magnetic_tracking.py calculate_element_minimum_distance_old RoI put_num_on_element check_same_pos_neg forward find_the_radius element_region find_nearest_n_neighbor_contour element_pos_neg create_elements_flux_size_dict event_tracking_backward draw_contour_line_event size_filter_v1 check_with_threshold event_tracking_forward find_contour_line check_with_threshold_2 draw_contour_line_event_half calculate_element_average_distance find_nearest_n_neighbor create_elements_flux_dict find_and_draw_contour_line_new magnetic_tracking element_moveing_distance two_way calculate_element_minimum_distance size_filter backward pre_calulate_flux_and_contour read_data_files create_elements_contour_dict adjust_data pre_processing model_predicting test_generator plot_mask train_generator validation_generator solarUnet save_result model_training post_processing conv2_block plot_tracking_results analysis flux_to_MX pixel_to_Mm statistics_analysis_lifetime statistics_analysis_area_flux read_feature_lifetime_list_file data len bitwise_not flipud imread listdir open size_filter ones create_elements_flux_dict label create_elements_contour_dict range range range len len set map intersection len len append shape tolist asarray range len str LINE_AA putText FONT_HERSHEY_SIMPLEX mean round euclidean ceil shape euclidean ceil shape euclidean find_contour_line range len range len range len euclidean KDTree min query append array dict calculate_element_minimum_distance list keys dict calculate_element_minimum_distance list keys combinations list time find_nearest_n_neighbor_contour draw_contour_line_event RoI mean put_num_on_element round check_same_pos_neg abs range append len combinations list time find_nearest_n_neighbor_contour draw_contour_line_event_half draw_contour_line_event RoI mean put_num_on_element round 
check_same_pos_neg abs range append len element_region abs check_with_threshold_2 event_tracking_forward RoI mean put_num_on_element check_same_pos_neg check_with_threshold find_and_draw_contour_line_new round range append len list element_region abs check_with_threshold_2 event_tracking_backward RoI mean put_num_on_element check_same_pos_neg check_with_threshold find_and_draw_contour_line_new round keys range append len check_same_pos_neg list time element_region abs check_with_threshold_2 event_tracking_forward event_tracking_backward RoI mean put_num_on_element append check_with_threshold find_and_draw_contour_line_new round keys range len time format imwrite backward print pre_calulate_flux_and_contour read_data_files append forward range two_way concatenate Input Model load_weights conv2_block compile adjust_data list dict flow_from_directory zip ImageDataGenerator adjust_data list dict flow_from_directory zip ImageDataGenerator join reshape shape resize imread listdir join format imwrite round enumerate data listdir format imwrite print shape flipud nan fill empty range open data format imwrite close shape flipud empty nan fill imread listdir range open show subplots set_title set_xlabel imshow set_ylabel savefig imread show set_label subplots set_title axes set_xlabel subplots_adjust colorbar imshow set_ylabel savefig Normalize append tick_params imread listdir ModelCheckpoint train_generator solarUnet fit_generator test_generator print predict_generator save_result solarUnet data subplots tick_params abs values open show sorted list ones savefig legend imread readsav create_elements_flux_dict flipud label size_filter xlabel text hist show sorted readsav xlabel yticks ylabel hist savefig figure legend xticks read_feature_lifetime_list_file range len statistics_analysis_area_flux statistics_analysis_lifetime | # Identifying and Tracking Solar Magnetic Flux Elements with Deep Learning Haodi Jiang, Jiasheng Wang, Chang Liu, Ju Jing, Hao Liu, Jason T. L. 
Wang and Haimin Wang Institute for Space Weather Sciences, New Jersey Institute of Technology ## Abstract Deep learning has drawn significant interest in recent years due to its effectiveness in processing big and complex observational data gathered from diverse instruments. Here we propose a new deep learning method, called SolarUnet, to identify and track solar magnetic flux elements or features in observed vector magnetograms based on the Southwest Automatic Magnetic Identification Suite (SWAMIS). Our method consists of a data pre-processing component that prepares | 1,883 |
deezer/gravity_graph_autoencoders | ['graph clustering', 'link prediction'] | ['Variational Graph Auto-Encoders', 'Gravity-Inspired Graph Autoencoders for Directed Link Prediction'] | gravity_gae/model.py gravity_gae/layers.py gravity_gae/input_data.py gravity_gae/train.py setup.py gravity_gae/evaluation.py gravity_gae/initializations.py gravity_gae/preprocessing.py gravity_gae/optimizer.py sigmoid compute_scores weight_variable_glorot load_data dropout_sparse get_layer_uid pairwise_distance GravityInspiredDecoder GraphConvolution InnerProductDecoder SourceTargetInnerProductDecoder GraphConvolutionSparse Layer GravityGCNModelVAE GCNModelVAE GCNModelAE SourceTargetGCNModelAE Model GravityGCNModelAE SourceTargetGCNModelVAE OptimizerAE OptimizerVAE mask_test_edges_bidirectional_link_prediction sparse_to_tuple preprocess_graph construct_feed_dict mask_test_edges_general_link_prediction mask_test_edges_biased_negative_samples norm T lamb hstack square log dimension sigmoid dot average_precision_score append epsilon roc_auc_score sqrt random_uniform adjacency_matrix read_edgelist T identity sparse_retain floor cast transpose matmul reduce_sum data shape transpose tocoo flatten dot coo_matrix eye diags dict update int arange eliminate_zeros ones sparse_to_tuple hstack transpose shuffle delete choice dia_matrix csr_matrix floor unique append empty len int T arange eliminate_zeros ones sparse_to_tuple hstack csr_matrix shuffle delete sign dia_matrix floor fliplr T arange eliminate_zeros print sparse_to_tuple shuffle sign dia_matrix verbose triu fliplr | ## Gravity-Inspired Graph Autoencoders for Directed Link Prediction This repository provides Python code to reproduce experiments from the article [Gravity-Inspired Graph Autoencoders for Directed Link Prediction](https://arxiv.org/pdf/1905.09570.pdf) published in the proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM 2019). 
We release Tensorflow implementations of the following **four directed graph embedding models** from the paper: - *Gravity-Inspired Graph Autoencoders* - *Gravity-Inspired Graph Variational Autoencoders* - *Source-Target Graph Autoencoders* - *Source-Target Graph Variational Autoencoders* together with standard *Graph Autoencoders (AE)* and *Graph Variational Autoencoders (VAE)* models from [Kipf and Welling (2016)](https://arxiv.org/pdf/1611.07308.pdf). We evaluate all six models on the **three directed link prediction tasks** introduced in section 4.1 of our paper: - *General Directed Link Prediction* | 1,884 |
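The gravity-inspired decoder described in the paper scores a directed edge i→j from a scalar "mass" of the target node and the distance between the two embeddings, roughly Â_ij = σ(m_j − λ log‖z_i − z_j‖²). A hedged NumPy sketch (the `eps` stabiliser and the random inputs are my assumptions, not the repository's TensorFlow code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gravity_decoder(z, m, lamb=1.0, eps=1e-2):
    """Asymmetric edge probabilities from node embeddings.

    z: (n, d) position embeddings; m: (n,) scalar mass per node.
    A_hat[i, j] = sigmoid(m[j] - lamb * log ||z_i - z_j||^2), so in
    general A_hat[i, j] != A_hat[j, i] -- a directed reconstruction.
    """
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1) + eps
    return sigmoid(m[None, :] - lamb * np.log(d2))

rng = np.random.default_rng(0)
A_hat = gravity_decoder(rng.normal(size=(5, 2)), rng.normal(size=5))
```

Because the distance term is symmetric while the mass term depends only on the target node, the decoder can represent asymmetric link probabilities — which is what makes directed link prediction possible here.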
deezer/spleeter | ['music source separation', 'speech enhancement'] | ['Spleeter: A Fast And State-of-the Art Music Source Separation Tool With Pre-trained Models'] | spleeter/utils/logging.py spleeter/model/functions/__init__.py spleeter/model/functions/blstm.py tests/test_github_model_provider.py spleeter/utils/tensor.py spleeter/audio/spectrogram.py tests/test_command.py spleeter/model/__init__.py spleeter/dataset.py spleeter/model/provider/__init__.py spleeter/__init__.py spleeter/utils/configuration.py spleeter/options.py spleeter/audio/__init__.py spleeter/audio/convertor.py spleeter/separator.py spleeter/utils/__init__.py spleeter/model/functions/unet.py spleeter/__main__.py spleeter/resources/__init__.py spleeter/audio/adapter.py spleeter/audio/ffmpeg.py tests/test_eval.py tests/test_train.py tests/test_separator.py spleeter/model/provider/github.py spleeter/types.py tests/__init__.py tests/test_ffmpeg_adapter.py InstrumentDatasetBuilder get_validation_dataset get_training_dataset DatasetBuilder version_callback create_estimator Separator DataGenerator SpleeterError entrypoint evaluate _compile_metrics train separate default AudioAdapter db_uint_spectrogram_to_gain gain_to_db spectrogram_to_db_uint to_n_channels to_stereo db_to_gain FFMPEGProcessAudioAdapter time_stretch random_pitch_shift pitch_shift random_time_stretch compute_spectrogram_tf Codec STFTBackend get_model_function model_fn InputProvider EstimatorSpecBuilder InputProviderFactory SpectralInputProvider WaveformInputProvider blstm apply_blstm apply_unet unet _get_conv_activation_layer softmax_unet _get_deconv_activation_layer apply compute_file_checksum GithubModelProvider ModelProvider load_configuration configure_logger TyperLoggerHandler pad_and_partition dataset_from_csv check_tensor_shape from_uint8_to_float32 pad_and_reshape from_float32_to_uint8 set_tensor_shape sync_apply test_version test_evaluate generate_fake_eval_dataset adapter audio_data test_save test_default_adapter 
test_load_error test_load test_checksum test_filename_conflict test_separate_to_file test_separator_backends test_separate test_filename_format generate_fake_training_dataset test_train DatasetBuilder DatasetBuilder echo get ConfigProto Estimator RunConfig get str partial train_and_evaluate load_configuration Estimator EvalSpec writeProbe info configure_logger ConfigProto TrainSpec join str error configure_logger separate_to_file join from_tuples product glob append median DataFrame join list items glob DB eval_mus_dir _compile_metrics info configure_logger separate spleeter resize_images random_uniform resize_images random_uniform join import_module getattr EstimatorSpecBuilder he_uniform get he_uniform partial conv_activation_layer deconv_activation_layer info _get_conv_activation_layer _get_deconv_activation_layer append stack apply_unet enumerate function sha256 startswith filterwarnings ERROR set_verbosity DEBUG setLevel get_logger INFO list concat shape func values reduce_max reduce_min shape pad len floormod reshape concat shape tile zeros from_tensor_slices read_csv constant logical_and equal enumerate set_shape invoke CliRunner join RandomState rand default save range makedirs default load list Separator _separate_tensorflow _stft _separate_librosa keys default load separate Separator default Separator Separator Separator join RandomState rand to_csv default save DataFrame range makedirs | <img src="https://github.com/deezer/spleeter/raw/master/images/spleeter_logo.png" height="80" /> [](https://github.com/deezer/spleeter/actions)  [](https://badge.fury.io/py/spleeter) [](https://anaconda.org/deezer-research/spleeter) [](https://hub.docker.com/r/deezer/spleeter) [](https://colab.research.google.com/github/deezer/spleeter/blob/master/spleeter.ipynb) [](https://gitter.im/spleeter/community) [](https://joss.theoj.org/papers/259e5efe669945a343bad6eccb89018b) > :warning: [Spleeter 2.1.0](https://pypi.org/project/spleeter/) release introduces some breaking 
changes, including new CLI option naming for input, and the drop of the dedicated GPU package. Please read [CHANGELOG](CHANGELOG.md) for more details. ## About **Spleeter** is [Deezer](https://www.deezer.com/)'s source separation library with pretrained models, written in [Python](https://www.python.org/) using [Tensorflow](https://tensorflow.org/). It makes it easy to train a source separation model (assuming you have a dataset of isolated sources), and provides already-trained state-of-the-art models for performing various flavours of separation: * Vocals (singing voice) / accompaniment separation ([2 stems](https://github.com/deezer/spleeter/wiki/2.-Getting-started#using-2stems-model)) | 1,885
deezer/w2v_reco_hyperparameters_matter | ['word embeddings'] | ['Word2Vec applied to Recommendation: Hyperparameters Matter'] | src/main.py word2vec.py src/data.py train_sg_pair Word2VecVocab score_sg_pair Word2VecTrainables BrownCorpus Text8Corpus LineSentence PathLineSentences score_cbow_pair Word2Vec train_cbow_pair get_data cold_start mean_confidence_interval run syn0_ngrams syn0_vocab expit syn0_lockf code syn0_ngrams_lockf hs searchsorted shape append syn0 np_sum syn0_vocab_lockf negative zeros deepcopy T dot neg_labels randint len T expit syn0_lockf code syn0_ngrams_lockf randint hs searchsorted dot shape neg_labels syn0_ngrams syn0_vocab_lockf negative syn0 syn0_vocab zeros append deepcopy code code to_dict array tolist load list endswith tolist choice startswith range len _ppf array len NearestNeighbors time print reshape len tqdm Word2Vec fit | # Getting Started This repository can be used to reproduce results of "Word2vec applied to Recommendation: Hyperparameters Matter" by H. Caselles-Dupré, F. Lesaint and J. Royo-Letelier. The paper will be published on the 12th ACM Conference on Recommender Systems, Vancouver, Canada, 2nd-7th October 2018. ## Usage with Docker [recommended] ### Install `git clone https://github.com/deezer/w2v_reco_hyperparameters_matter.git` `cd w2v_reco_hyperparameters_matter` `docker build -t w2v_reco_hyperparameters_matter .` ### Run To reproduce results in *Table 1: Next Event Prediction*, line *Fully optimized SGNS* from paper: `docker run -ti --name=music_1_sgns w2v_reco_hyperparameters_matter:latest /bin/bash -c "python src/main.py --path_data='data/music_1.npy' --p2v=1 --window_size=3 --it=110 --sample=0.00001 --power_alpha=-0.5"` | 1,886 |
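The `--power_alpha=-0.5` flag in the docker run above sets the exponent of the negative-sampling distribution, one of the hyperparameters the paper shows matters for recommendation. A small NumPy sketch of its effect (illustrative only; the toy counts are made up):

```python
import numpy as np

def neg_sampling_dist(counts, alpha):
    """Negative-sampling distribution p(w) proportional to count(w)**alpha.

    alpha = 0.75 is the classic word2vec default; a negative exponent
    (as in the paper's --power_alpha=-0.5 run) shifts probability mass
    toward rare items instead of frequent ones.
    """
    p = np.asarray(counts, dtype=float) ** alpha
    return p / p.sum()

counts = np.array([1000, 100, 10])        # frequent -> rare items
p_default = neg_sampling_dist(counts, 0.75)
p_paper = neg_sampling_dist(counts, -0.5)
```

With α = −0.5 most of the sampling mass moves to the rare items — the opposite of the default behaviour; gensim exposes the same knob as the `ns_exponent` parameter of `Word2Vec`.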
delair-ai/DISIR | ['remote sensing image classification', 'interactive segmentation', 'semantic segmentation'] | ['DISIR: Deep Image Segmentation with Interactive Refinement'] | train/semantic_segmentation/models/segnet.py train/semantic_segmentation/loaders/SegDataLoader.py train/semantic_segmentation/models/utils/decoder.py train/semantic_segmentation/models/lednet.py qgis_plugin/test/test_interact_learn_dialog.py train/semantic_segmentation/export_graph.py qgis_plugin/frontend/core.py qgis_plugin/frontend/interact_learn.py train/semantic_segmentation/models/utils/batchnorm.py train/semantic_segmentation/models/utils/jpu.py qgis_plugin/test/qgis_interface.py qgis_plugin/frontend/dialogs/layers_dialog.py train/semantic_segmentation/__main__.py qgis_plugin/frontend/dialogs/class_dialog.py qgis_plugin/frontend/dialogs/interact_learn_dialog.py qgis_plugin/test/test_resources.py train/semantic_segmentation/loaders/sparsifier.py qgis_plugin/backend/utils.py train/semantic_segmentation/models/linknet34.py qgis_plugin/test/test_translations.py qgis_plugin/backend/daemon.py qgis_plugin/help/source/conf.py qgis_plugin/test/__init__.py train/semantic_segmentation/models/backbone/resnet.py train/semantic_segmentation/utils/metrics.py train/semantic_segmentation/models/backbone/__init__.py train/semantic_segmentation/loaders/loaders.py train/semantic_segmentation/utils/image.py train/semantic_segmentation/models/d3net.py qgis_plugin/frontend/utils.py qgis_plugin/backend/ssh_connexion.py qgis_plugin/frontend/set_points.py qgis_plugin/plugin_upload.py qgis_plugin/test/test_init.py train/semantic_segmentation/models/backbone/xception.py train/semantic_segmentation/models/utils/replicate.py train/semantic_segmentation/models/utils/unittest.py qgis_plugin/frontend/coords_tool.py qgis_plugin/backend/__main__.py qgis_plugin/frontend/dialogs/__init__.py qgis_plugin/resources.py train/format_gt.py train/semantic_segmentation/params.py train/semantic_segmentation/models/blocks.py 
train/semantic_segmentation/models/backbone/drn.py qgis_plugin/frontend/__init__.py train/semantic_segmentation/train.py train/semantic_segmentation/trainer.py train/semantic_segmentation/models/utils/__init__.py train/semantic_segmentation/models/backbone/mobilenet.py train/semantic_segmentation/models/deeplab.py train/semantic_segmentation/models/erfnet.py train/semantic_segmentation/models/utils/aspp.py qgis_plugin/test/utilities.py train/semantic_segmentation/models/unet.py train/semantic_segmentation/models/utils/comm.py train/semantic_segmentation/classic_trainer.py qgis_plugin/__init__.py qgis_plugin/test/test_qgis_environment.py train/semantic_segmentation/models/__init__.py main hide_password qCleanupResources qInitResources Daemon SshConnexion make_batches find_n_classes check_inputs_and_net vec_to_list polygonize print_warning from_coord_to_patch main parse_args CoordsTool QgsCorePlugin InteractLearn SetPoints client_to_server qgs_func set_renderer_vector raster_to_file set_renderer_raster WarnQgs file_in_layers get_layers random_colors find_file_from_layer classFactory ClassDialog InteractLearnDialog LayersDialog QgisInterface TestInit InteractLearnDialogTest QGISTest InteractLearnDialogTest SafeTranslationsTest get_qgis_app convert_from_color format_gt reformat_gt ClassicTrainer export_graph config_factory MLConfig train Trainer cli GTDataset RGBDataset SegDataLoader Sparsifier Decoder Encoder34 Encoder SeparableConv2d_BN BasicBlock denseUnet121 BasicBlockCU D3Net weights_init_orthogonal get_decoder_block weights_init_uniform BasicBlock _Transition get_norm_layer center_crop BilinearBlock get_conv_type _DenseBlock define_G DenseUNet weights_init_xavier init_weights _TransitionUp weights_init_kaiming BasicBlockToCrop conv4x4 weights_init_normal BasicBlock2 conv3x3 _DenseLayer BasicBlock5x5 UpsampleBlock DeepLab Decoder DownsamplerBlock Encoder UpsamplerBlock ERFNet non_bottleneck_1d APN_Module SS_nbt_module Interpolate Decoder DownsamplerBlock Encoder 
channel_shuffle LEDNet Conv2dBnRelu split LinkNet34 SegNet outconv up double_conv UNet down inconv drn_d_54 drn_c_58 drn_d_40 drn_d_38 drn_c_26 Bottleneck drn_d_105 DRN_A drn_d_22 conv3x3 DRN drn_a_50 drn_d_24 drn_c_42 BasicBlock fixed_padding InvertedResidual conv_bn MobileNetV2 ResNet ResNet101 Bottleneck fixed_padding Block AlignedXception SeparableConv2d build_backbone build_aspp _ASPPModule ASPP _sum_ft SynchronizedBatchNorm2d _unsqueeze_ft _SynchronizedBatchNorm SynchronizedBatchNorm1d SynchronizedBatchNorm3d SyncMaster FutureResult SlavePipe Decoder build_decoder JPU SeparableConv2d execute_replication_callbacks CallbackContext DataParallelWithCallback patch_replication_callback TorchTestCase as_numpy sliding_window from_coord_to_patch grouper f1_score IoU accuracy print format ServerProxy hide_password find qRegisterResourceData qUnregisterResourceData tuple islice parameters sum append stack print str add_argument ArgumentParser Daemon parse_args ssh run enumerate find_file_from_layer get_layers append BeautifulSoup htmlMetadata get_layers Listener recv accept Exception close Client send list str QColor geometryType QgsCategorizedSymbolRenderer setRenderer random_colors setColor QgsRendererCategory append range setOpacity defaultSymbol int str QgsRasterShader QColor setRasterShaderFunction QgsSingleBandPseudoColorRenderer dataProvider Interpolated setOpacity setColorRampItemList QgsColorRampShader setRenderer random_colors ColorRampItem triggerRepaint append setColorRampType range pyqtSignal QWidget QSize argv resize debug QgisInterface QgsMapCanvas QgsApplication initQgis showSettings join glob print tqdm eval reformat_gt input zeros list all items load join basename replace model print config_factory exit Trainer eval load_weights assert_almost_equal save_to_jit numpy ib N_CLASSES arange strip PATH_MODELS seed basicConfig basename ClassicTrainer SAVE_FOLDER config_factory EPOCHS range test load_weights info manual_seed print makedirs get_conv_type 
normal_ __name__ fill_ data uniform constant __name__ data constant xavier_normal uniform __name__ data constant uniform kaiming_normal_ __name__ data constant orthogonal uniform __name__ apply update list DenseUNet group init_weights match load_state_dict keys compile state_dict size BatchNorm2d partial InstanceNorm2d init_weights denseUnet121 int contiguous round size view contiguous load_url load_state_dict DRN_A load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict load_url DRN load_state_dict pad ResNet list hasattr __data_parallel_replicate__ modules enumerate len replicate data isinstance range tuple islice iter asarray logical_and NaN append sum range reshape logical_and sk_f1 NaN append sum range | <img src="https://github.com/delair-ai/DISIR/blob/master/imgs/logo-delair.png" alt="drawing" width="200" align="left"/> <img src="https://github.com/delair-ai/DISIR/blob/master/imgs/logo-onera.png" alt="drawing" width="200" align="right"/> <br /> # Presentation This repository contains the code of **DISIR**: Deep Image Segmentation with Interactive Refinement. In a nutshell, it consists in neural networks trained to perform semantic segmentation with human guidance. You may refer to our [paper](https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/V-2-2020/877/2020/isprs-annals-V-2-2020-877-2020.pdf) for detailed explanations. This repository is divided into two parts: - `train` which contains the training code of the networks ([README](./train/README.md)) - `qgs_plugin` which contains the code of the QGIS plugin used to perform the interactive segmentation ([README](./qgis_plugin/README.md)) # Install Python dependencies ``` | 1,887 |
dennerepin/StochNetV2 | ['time series'] | ['Automated Deep Abstractions for Stochastic Chemical Reaction Networks'] | stochnet_v2/scripts/train_search.py stochnet_v2/utils/layer_prepostprocess.py stochnet_v2/CRN_models/X44.py stochnet_v2/scripts/simulate_histogram_data_gillespy.py stochnet_v2/static_classes/top_layers.py stochnet_v2/static_classes/trainer.py stochnet_v2/dynamic_classes/util.py stochnet_v2/dataset/simulation_gillespy.py stochnet_v2/CRN_models/X40.py stochnet_v2/dynamic_classes/nn_body_search.py stochnet_v2/CRN_models/X16.py stochnet_v2/scripts/evaluate.py stochnet_v2/scripts/format_data_for_training.py stochnet_v2/scripts/simulate_data_kappy.py stochnet_v2/static_classes/nn_bodies.py stochnet_v2/utils/util.py stochnet_v2/dataset/dataset.py stochnet_v2/CRN_models/X47.py stochnet_v2/CRN_models/EGFR.py stochnet_v2/CRN_models/Gene.py stochnet_v2/utils/registry.py stochnet_v2/CRN_models/Bees.py stochnet_v2/dynamic_classes/trainer.py stochnet_v2/dynamic_classes/op_registry.py stochnet_v2/CRN_models/base.py stochnet_v2/dynamic_classes/nn_body.py stochnet_v2/dynamic_classes/model.py stochnet_v2/__init__.py stochnet_v2/scripts/simulate_data_gillespy.py stochnet_v2/utils/errors.py stochnet_v2/utils/file_organisation.py stochnet_v2/static_classes/model.py stochnet_v2/dataset/simulation_kappy.py stochnet_v2/utils/benchmarking.py setup.py stochnet_v2/CRN_models/SIR.py stochnet_v2/utils/evaluation.py stochnet_v2/static_classes/random_variables.py stochnet_v2/utils/luigi_workflow.py stochnet_v2/utils/luigi_workflow_kappy.py stochnet_v2/scripts/simulate_histogram_data_kappy.py stochnet_v2/static_classes/grid_runner.py stochnet_v2/dynamic_classes/genotypes.py stochnet_v2/scripts/train_static.py _get_requirements BaseCRNModel BaseSBMLModel Bees EGFR Gene SIR X16 X40 X44 X47 DataTransformer BaseDataset HDF5Dataset TFRecordsDataset _concatenate_simulations _stack_simulations _perform_simulations _single_simulation build_simulation_dataset _save_simulation_data 
_find_var_value _perform_simulations _single_simulation build_simulation_dataset get_random_initial_settings _body_search _body_trained NASStochNet cell body get_genotypes cell body expand_op cat_onehot mixed_op_cat mixed_op simple_dense none skip_connect relu swish gated_linear_unit activated_dense element_wise Trainer postprocess _expand_element_wise l2_regularizer _gated_linear_unit cell_is_expanding _simple_dense _expand_identity preprocess _activated_dense l1_regularizer main main main main main get_histogram_settings main main main Model _single_trace _save_simulation_data GridRunner generate_gillespy_traces StochNet _get_mixture block_c block_b lstm block_a body_lstm body_main body_a body_b gru Mixture MultivariateLogNormalTriL MultivariateNormalTriL Categorical MultivariateNormalDiag softplus_activation MultivariateLogNormalTriLOutputLayer CategoricalOutputLayer _softplus_inverse MultivariateNormalTriLOutputLayer MixtureOutputLayer RandomVariableOutputLayer nn_elu_activation MultivariateNormalDiagOutputLayer Trainer benchmark_ssa benchmark benchmark_nn NotRestoredVariables DimensionError ShapeError _get_histograms get_distance _iou_distance _get_data_bounds evaluate get_nn_histogram_data get_gillespy_histogram_data _histogram_distance HistogramFileExplorer ProjectFileExplorer DatasetFileExplorer maybe_create_dir ModelFileExplorer noam_norm layer_prepostprocess comma_separated_string_to_integer_list l2_norm apply_norm dropout_with_broadcast_dims layer_preprocess layer_norm cast_like layer_postprocess TrainSearch GenerateDataset GenerateHistogramData Evaluate GlobalParams FormatDataset TrainStatic TrainSearch GenerateDataset GenerateHistogramData Evaluate GlobalParams FormatDataset TrainStatic normalize_to_list Registry random_pick_traces get_transformed_tensor graph_def_to_graph plot_random_traces apply_regularization visualize_genotypes copy_graph merge_species_and_param_settings maybe_create_dir plot_traces _single_trace visualize_description plot_trace 
str_to_bool get_cmap numpy_softmax postprocess_description_dict generate_gillespy_traces _concatenate_simulations _stack_simulations _perform_simulations info str join remove list concatenate tqdm range str join remove list concatenate tqdm range load join get_randomized_parameters starmap partial cpu_count close crn_class import_module getattr info append Pool range list concatenate ones set_parameters set_species_initial_value array _save_simulation_data values run str join save info join get_random_initial_settings SimulationParameter simulation_delete min stack project_parse append KappaStd shutdown wait_for_simulation_stop range add_model_string simulation_start next splitlines print dict append randint max range as_list reshape as_list reshape expand_reduce list normal_reduce zip len cell_is_expanding range identity len constant boolean_mask add_n append bool enumerate sample one_hot Categorical reduce_sum stack cat_onehot append expand_dims enumerate list cell_is_expanding Genotype _parse append range ndims _expand_identity _expand_identity int int int time_lag_range model_id ArgumentParser dataset_id list get_dataset_file_explorer ProjectFileExplorer timestep nb_past_timesteps get_histogram_file_explorer map parse_args distance_kind project_folder join evaluate add_argument model_histogram_folder model_name nb_randomized_params split seed test_fraction str_to_bool DataTransformer save_fn random_seed positivity time save_data_for_ml_hdf5 save_format dataset_fp nb_trajectories save endtime get_initial_settings getattr build_simulation_dataset import_module settings_fp info nb_settings dataset_folder randint sorted File close histogram_dataset_fp histogram_settings_fp NASStochNet batch_size dataset_kind stddev mixture_config_path n_epochs_heat_up nb_features n_epochs_finetune n_epochs_arch n_epochs_interval n_epochs_main get_model_file_explorer body_config_path add_noise train StochNet n_epochs list set_parameters dict zip run set_species_initial_value array 
_save_simulation_data len int starmap str partial join remove concatenate cpu_count close rmtree maybe_create_dir timespan linspace Pool range len items list component_class warning append activation add activation add activation block_fn range block_fn activation_fn range block_fn MultiRNNCell dynamic_rnn as_list reshape body_fn info debug debug time generate_traces generate_gillespy_traces get_initial_settings time generate_gillespy_traces get_initial_settings time generate_traces get_initial_settings load histogram_dataset_fp get_dataset_file_explorer ProjectFileExplorer load get_dataset_file_explorer ProjectFileExplorer error histogram_dataset_fp StochNet generate_traces save info min max delete delete shape histogram append sum range shape abs sum minimum maximum mean shape sum minimum list _get_histograms _get_data_bounds distance_fn maximum zip array get_nn_histogram_data cpu_count histogram_folder Pool list get_dataset_file_explorer ProjectFileExplorer get_distance distance_fn histogram_dataset_fp get_histogram_file_explorer map ylabel shape title maybe_create_dir getattr savefig legend dirname range partial plot close import_module info pardir load join time xlabel tqdm figure get_species_names array len makedirs convert_to_tensor dtype cast shape get_shape len Parameter FloatParameter IntParameter endtime nb_trajectories timestep nb_settings params_to_randomize model_name project_folder dataset_id random_seed positivity timestep nb_past_timesteps nb_randomized_params test_fraction project_folder save_format dataset_id random_seed nb_histogram_settings timestep params_to_randomize nb_histogram_trajectories model_name project_folder histogram_endtime dataset_id random_seed add_noise batch_size mixture_config_path timestep nb_past_timesteps stddev nb_randomized_params model_id project_folder n_epochs dataset_id nb_features body_config_path n_epochs_finetune n_epochs_interval add_noise n_epochs_arch batch_size mixture_config_path timestep nb_past_timesteps 
n_epochs_heat_up nb_randomized_params model_id stddev project_folder save_format dataset_id n_epochs_main nb_features body_config_path distance_kind timestep nb_past_timesteps nb_randomized_params model_id time_lag_range settings_idxs_to_save_histograms model_name project_folder target_species_names dataset_id Parameter IntParameter Parameter FloatParameter Parameter FloatParameter IntParameter Parameter FloatParameter IntParameter Parameter FloatParameter IntParameter Parameter FloatParameter IntParameter isinstance upper startswith rmtree export_meta_graph name Graph show get_cmap range plot show enumerate plot_trace shape random_pick_traces plot_trace map stack add_to_collection REGULARIZATION_LOSSES exp max pop fill_diagonal transpose matmul shape zeros numpy_softmax range list subplots set_title suptitle isinstance set_axis_off tight_layout colorbar set_ticks imshow maybe_create_dir append range enumerate len expand_reduce normal subgraph node normal_reduce edge extend Digraph expand render range enumerate len append range len | # StochNetV2 Toolbox for stochastic simulations with CRN models or their deep abstractions. \ Abstract models are based on neural networks predicting a distribution to sample next system state. The method is described in details here: https://doi.org/10.1016/j.ic.2021.104788, https://arxiv.org/abs/2002.01889, https://link.springer.com/chapter/10.1007/978-3-030-59854-9_4. ## Installation. For *Anaconda* or *Miniconda*: Create virtual environment: ```bash $ conda create -n MyEnv python=3.6 ``` | 1,888 |
denproc/Taming-VAEs | ['density estimation'] | ['Importance Weighted Autoencoders'] | conv_draw.py train.py IWAE_GECO.py datasets.py main.py utils.py model.py BaseAttention Conv2dLSTMCell DRAW ConvolutionalDRAW FilterBankAttention MNIST CELEBA CIFAR10 train_geco RE KL_divergence IWAE_KL IVAE VAE train_beta train_geco_draw draw_hist KL_divergence train_geco loss_beta_vae RE_mtr RE log_likelihood train_beta_draw Compute_NLL sample_vae marginal_KL exp size pow repeat sum subplots model zero_grad show clear_output set_title KL_term legend append to range format plot mean constraint_f time backward print train step show subplots set_title plot legend time clear_output format model backward constraint_f train step zero_grad print mean draw_hist append to sum range draw_hist KL_divergence time clear_output format backward print train step zero_grad draw_hist append loss_beta_vae to forward range time RE_mtr clear_output format model backward print train step zero_grad mean draw_hist append to sum range sample decode to reconstruction_mu to model to model | # Taming VAEs ### Bayesian Methods of Machine Learning Course Project Main paper: https://arxiv.org/abs/1810.00597 IWAE: https://arxiv.org/pdf/1509.00519.pdf ##### Project Goals: * Reproduce experiments (see Figure 2 and Figure 6) from the main paper * Change ELBO, using IWAE bound (see the second paper) * Repeat experiments with the new functional * Compare the results #### Team: Kuzina Anna, Prokopenko Denis, Shumovskaia Valentina | 1,889 |
desh2608/dover-lap | ['speech recognition', 'speaker diarization', 'graph partitioning'] | ['DOVER: A Method for Combining Diarization Outputs', 'Reformulating DOVER-Lap Label Mapping as a Graph Partitioning Problem'] | dover_lap/src/label_mapping.py dover_lap/src/__init__.py dover_lap/src/label_voting.py dover_lap/libs/turn.py dover_lap/dover_lap.py dover_lap/libs/__init__.py dover_lap/src/mapping/__init__.py dover_lap/libs/utils.py dover_lap/libs/rttm.py dover_lap/__init__.py dover_lap/src/doverlap.py setup.py dover_lap/libs/uem.py dover_lap/src/mapping/map_utils.py dover_lap/src/mapping/hungarian.py dover_lap/src/mapping/greedy.py main load_rttms write_rttm validate_rttm _parse_rttm_line load_rttm merge_turns trim_turns chop_tree Turn UEM load_uem groupby command_required_option PythonLiteralOption error warn format_float xor info clip DOVERLap LabelMapping LabelVoting GreedyMap HungarianMap compute_spk_overlap get_speaker_keys append load_rttm seed groupby list format combine_turns_list shuffle dict load_rttms info append load_uem sum write_rttm values float strip split begin from_tuples list sorted groupby end merge_overlaps extend Turn append len update begin envelop data discard end add set Interval at begin from_tuples update data sorted Turn groupby end warn set UEM chop_tree append chop print print print sorted append sorted sorted keys enumerate | # DOVER-Lap Official implementation for [DOVER-Lap: A method for combining overlap-aware diarization outputs](https://arxiv.org/pdf/2011.01997.pdf). ## Installation ```shell pip install dover-lap ``` ## How to run After installation, run ```shell dover-lap [OPTIONS] OUTPUT_RTTM [INPUT_RTTMS]... | 1,890 |
deshumake/SORT | ['multiple object tracking'] | ['Simple Online and Realtime Tracking'] | sort.py KalmanBoxTracker iou Sort convert_bbox_to_z associate_detections_to_trackers convert_x_to_bbox parse_args minimum maximum float sqrt linear_assignment iou concatenate reshape append zeros empty enumerate add_argument ArgumentParser | SORT ===== A simple online and realtime tracking algorithm for 2D multiple object tracking in video sequences. See an example [video here](https://motchallenge.net/movies/ETH-Linthescher-SORT.mp4). By Alex Bewley ### Introduction SORT is a barebones implementation of a visual multiple object tracking framework based on rudimentary data association and state estimation techniques. It is designed for online tracking applications where only past and current frames are available and the method produces object identities on the fly. While this minimalistic tracker doesn't handle occlusion or re-entering objects its purpose is to serve as a baseline and testbed for the development of future trackers. SORT was initially described in an [arXiv tech report](http://arxiv.org/abs/1602.00763). At the time of the initial publication, SORT was ranked the best *open source* multiple object tracker on the [MOT benchmark](https://motchallenge.net/results/2D_MOT_2015/). This code has been tested on Mac OSX 10.10, and Ubuntu 14.04, with Python 2.7 (anaconda). **Note:** A significant proportion of SORT's accuracy is attributed to the detections. | 1,891 |
desimone/Musculoskeletal-Radiographs-Abnormality-Classifier | ['anomaly detection'] | ['MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs'] | train.py mura.py metrics.py eval.py download_and_convert_mura.py pytorch/train.py pytorch/dataloader.py ImageString SKLearnMetrics Mura train MuraDataset compile compile layers class_indices MobileNet TensorBoard strftime Model flow_from_directory samples parse_args fit_generator load_weights resume unique classes ImageDataGenerator compile EarlyStopping output ModelCheckpoint compute_class_weight len compile | # Musculoskeletal Radiographs Abnormality Classifier ## Experiments | Network | Accuracy (encounter) | Precision (encounter) | Recall (encounter) | F1 (encounter) | Kappa (encounter) | | ---------------------- | -------------------- | --------------------- | ------------------ | -------------- | ----------------- | | DenseNet169 (baseline) | .83 (.84) | .82 (.82) | .87 (.90) | .84 (.86) | .65 (.65) | | MobileNet | .81 (.83) | .80 (.82) | .85 (.89) | .82 (.85) | .62 (.62) | | NASNetMobile | .82 (.83) | .78 (.80) | .89 (.92) | .83 (.86) | .63 (.63) | Also, ResNet50 in pytorch which achieved equivalent results. ## The [Mura](https://arxiv.org/abs/1712.06957) Dataset ```latex | 1,892 |
desire2020/RankGAN | ['text generation'] | ['SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient', 'Adversarial Ranking for Language Generation'] | rollout.py model_settings.py discriminator.py sequence_gan.py dataloader.py generator.py target_lstm.py Gen_Data_loader Dis_dataloader cosine_distance linear get_rank_score Discriminator highway Generator ROLLOUT generate_samples pre_train_epoch main target_loss TARGET_LSTM as_list while_loop shape int range extend generate next_batch num_batch append pretrain_loss reset_pointer range run pretrain_step num_batch append next_batch reset_pointer range Dis_dataloader num_batch Gen_Data_loader create_batches target_loss TARGET_LSTM Session ROLLOUT run seed open str Generator Discriminator generate pre_train_epoch range load_train_data close get_reward update_params ConfigProto load generate_samples print write global_variables_initializer next_batch real_data_vocab_size reset_pointer | # RankGAN ## Requirements: * **Tensorflow r1.6.0** * Python 3.x * CUDA 9.0 (For GPU) ## Introduction Apply Generative Adversarial Nets to generating sequences of discrete tokens with optimization via replacing the discriminator with a ranker. The previous research paper [SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient](http://arxiv.org/abs/1609.05473) has been accepted at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17). The research paper [Adversarial Ranking for Language Generation](https://papers.nips.cc/paper/6908-adversarial-ranking-for-language-generation.pdf) has been accepted at 31st Conference on Neural Information Processing Systems (NIPS 2017). We reproduce example codes to repeat the synthetic data experiments with oracle evaluation mechanisms. | 1,893 |
destiny301/dnc | ['automl'] | ['Deep-n-Cheap: An Automated Search Framework for Low Complexity Deep Learning'] | model_search.py main.py model/tf_model.py model/torch_model.py ei downsample bayesopt gp_predict dropout_mlp form_shortcuts_start_every form_shortcuts kernelfunc_se run_model_search_cnn batch_norm dropout convert_keys default_weight_decay distancefunc_ramp covmat shortcut_conns lossfunc get_states run_model_search_mlp Net run_network get_numparams get_data_npz get_numparams get_data_torchvision train_batch run_network Hook get_data_npz Net eval_data save_net append get_numparams range minimum list asarray rand i4_sobol_generate astype item append randint range form_shortcuts get_numparams run_network log10 max default_weight_decay minimum distancefunc distancefunc_ramp kernelfunc zeros sum range enumerate len ones covmat T len cdf pdf update ei format asarray print ones covmat len inv lossfunc gp_predict get_states argsort eye append zeros range enumerate asarray format product print lossfunc append range len int asarray arange format print astype lossfunc ceil zeros append int asarray arange format print astype lossfunc ceil zeros append enumerate format asarray print lossfunc append format asarray print lossfunc append minimum update time format asarray get_numparams dropout batch_norm print shortcut_conns downsample bayesopt append range enumerate len update time format asarray print bayesopt dropout_mlp append range enumerate len Net sum build argmax max SparseCategoricalCrossentropy format evaluate print EarlyStopping Adam build TimeHistory Net DataLoader array item sum LearningRateScheduler compile fit load int len as_tensor int list Compose Subset dataset range len backward zero_grad lossfunc item step max net eval save state_dict data train_batch MultiStepLR DataParallel round mlp device_count randperm eval_data to CrossEntropyLoss range bias_init time tqdm parameters wt_init train step len | # deep-n-cheap ## Welcome
_Deep-n-Cheap_ – an AutoML framework to search for deep learning models. Features include: - **Complexity oriented**: Get models with good performance and low training time or parameter count - Customizable search space for both architecture and training hyperparameters - Supports CNNs and MLPs **Highlight**: State-of-the-art performance on benchmark and custom datasets with training time orders of magnitude lower than competing frameworks and NAS efforts. **Research paper** available on [arXiv](https://arxiv.org/abs/2004.00974). Please consider citing it if you use or benefit from this work. ## How to run? - Install Python 3
devAmoghS/Keras-Style-Transfer | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | main.py deprocess_image Evaluator gram_matrix total_variational_loss content_loss preprocess_image style_loss expand_dims preprocess_input img_to_array load_img astype dot transpose batch_flatten permute_dimensions gram_matrix square | # Keras-Style-Transfer (KeSTra) An implementation of "A Neural Algorithm of Artistic Style" (http://arxiv.org/abs/1508.06576) in Keras The code present in this repository is presented in this [blog](https://medium.com/@singhal.amogh1995/utilising-cnns-to-transform-your-model-into-a-budding-artist-1330dc392e25). The code is written in Keras 2.2.2 # Preview This is a 5-sec gif of **Chicago city** painted in the style of **Rain Princess** <p align="center"> <img src="https://media.giphy.com/media/i4ElhKepMTcIZiqcma/giphy.gif" width="480" height="270"/> </p> ### Content Image and Style Image | 1,895 |
develooper1994/MasterThesis | ['speech enhancement'] | ['SEGAN: Speech Enhancement Generative Adversarial Network', 'Whispered-to-voiced Alaryngeal Speech Conversion with Generative Adversarial Networks', 'Language and Noise Transfer in Speech Enhancement Generative Adversarial Network'] | clean.py not_functional/models/utils/file_logger.py select_speakers.py segan/models/modules.py not_functional/main.py not_functional/models/architectures/Generators/WaveGANGenerator.py segan/models/generator.py not_functional/module_testroom.py not_functional/models/optimizers/__init__.py segan/datasets/vc_dataset.py segan/models/discriminator.py not_functional/models/architectures/Discriminators/BaseDiscriminator.py not_functional/models/architectures/WaveGAN_standalone.py train.py not_functional/models/custom_transforms/custom_transforms.py not_functional/models/DefaultTrainBuilder.py not_functional/models/architectures/Discriminators/WaveGANDiscriminator.py not_functional/models/architectures/WaveGAN.py not_functional/models/utils/BasicUtils.py segan/datasets/data_tools/io.py not_functional/models/optimizers/BaseOptimizer.py segan/models/ops.py not_functional/models/utils/visualization/visualization.py not_functional/models/DataLoader/custom_DataLoader.py segan/datasets/__init__.py not_functional/models/Trainers/DefaultTrainer.py segan/models/__init__.py not_functional/models/Trainers/WaveganTrainer.py segan/__init__.py segan/datasets/data_tools/error_metrics.py not_functional/models/utils/visualization/matplotlib_visualization.py not_functional/models/architectures/layers/BaseLayers.py not_functional/models/architectures/Segan.py not_functional/models/utils/WaveGANUtils.py segan/utils.py not_functional/models/utils/BaseGANUtils.py segan/datasets/se_dataset.py segan/datasets/data_tools/interpolate.py segan/models/spectral_norm.py not_functional/config.py showFFT.py not_functional/models/Trainers/TrainingUtility.py eval_noisy_performance.py 
not_functional/models/losses/BaseLoss.py not_functional/models/DataLoader/DataLoader.py purge_ckpts.py not_functional/models/architectures/Generators/BaseGenerator.py segan/models/core.py segan/models/model.py config.py weightG_fmt_converter.py not_functional/models/DataLoader/AudioDataset.py downsample get_input_sampling check_input_sampling main ArgParser main clean txt_clean_file main RunBuilder main DefaultRunManager Run Epoch DefaultTrainBuilder WaveGAN WaveGAN_standalone WaveGANDiscriminator WaveGANGenerator Reshape Transpose1dLayer DownSample1D UpSample1D Conv1D WaveResBlock PhaseRemove PhaseShuffle Sc09 AudioDataset Piano split_manage_data sample_generator get_all_audio_filepaths split_data batch_generator save_samples WavDataLoader Dataset wassertein_loss optimizers DefaultTrainer TrainingUtility WaveGan_GP BaseGANUtils creat_dump get_recursive_files load_wav save_models tocpu_all prevent_net_update require_net_update latent_space_interpolation accuracy2_ sample_audio wav_generator model_save accuracy_ parallel_models time_since visualize_audio model_load sample_buffer weights_init gradients_status sample_noise numpy_to_var Parameters visualize_loss update_optimizer_lr create_stream_reader print_net calc_gradient_penalty get_params torch_image_to_numpy_image save_samples model_load_ evaluate make_path record_costs rgb2gray print_all data_getters tocuda_all compute_and_record_batch_history save_avg_cost_one_epoch init_console_logger file_logger WaveGANUtils matplotlib_visualization plot_loss imshow plot_conv2d_weights inspect_data inspect_one_data CompositeEval wss llr uttname2spkid SSNR composite_helper eval_composite Additive lpcoeff ComposeAdditive denormalize_wave_minmax make_divN PESQ pre_emphasize abs_normalize_wave_minmax slice_index_helper abs_short_normalize_wave_minmax de_emphasize slice_signal_index SEDataset RandomChunkSEF0Dataset collate_fn SEH5Dataset slice_signal dynamic_normalize_wave_minmax normalize_wave_minmax RandomChunkSEDataset 
varlen_wav_collate VCDataset RMSE MCD AFPR linear_interpolation interpolation process_guia process_file main read_aco_file aco2wav write_aco_file wav2aco Conv1DResBlock pos_code LayerNorm DiscBlock Model Saver GBlock Discriminator Generator GSkip Generator1D wsegan_weights_init AEWSEGAN weights_init WSEGAN SEGAN GConv1DBlock CombFilter PostProcessingCombNet OctConv Windowing SincConv VirtualBatchNorm1d Self_Attn GDeconv1DBlock ResARModule forward_norm ResBlock1D build_norm_layer VirtualGConv1DBlock make_optimizer F0Evaluator compute_MAE KLD select_voiced get_grads compute_accuracy convert_wav SpectralNorm l2normalize decode int run get_input_sampling soundfile load_pretrained synthesis_path cuda pre_emphasize basename view default_timer test_files AEWSEGAN generate WSEGAN format glob eval write_sampling normalize_wave_minmax g_pretrained_ckpt enumerate h5 join read preemph isdir print write SEGAN len CompositeEval logfile open append sum test_wavs close mean load clean_wavs isnan glob join ckpt_dir print DataLoader SEH5Dataset l1_dec_step noisy_trainset l1_dec_epoch seed aewsegan h5_data_root SEDataset MSELoss save_freq to wsegan l1_weight clean_trainset manual_seed clean_valset noisy_valset d_pretrained_ckpt empty_cache train DefaultTrainBuilder train_experiments join write_wav format str make_path enumerate load astype pad randint abs max len append buffer_stream Streamer ShuffledMux int format error shuffle ceil tensor batch_generator len split_data get_all_audio_filepaths list keys values normal_ to join isdir endswith append listdir makedirs time floor show subplot stft amplitude_to_db colorbar title numpy figure waveplot specshow abs enumerate show format plot xlabel grid ylabel tight_layout title savefig figure legend zip sample_noise visualize_audio load pad abs max len randint astype len randint len load_wav sample_audio append Streamer ShuffledMux param_groups parameters gradients_status gradients_status Tensor to join Parameters make_path strftime args 
view size rand expand netD to join save state_dict join cuda cpu append numpy record_costs wassertein_loss tocpu_all batch_loss append float sum len isinstance Conv2d xavier_uniform_ bias modules weight constant_ Linear print print net mean topk type FloatTensor data max setFormatter addHandler StreamHandler Formatter DEBUG setLevel INFO FileHandler join list plot xlabel grid ylabel tight_layout savefig legend range len show str print title hist numpy modules weight enumerate show format print shape torch_image_to_numpy_image make_grid print imshow get_one_iter train show str rgb2gray squeeze add_subplot imshow shape figure annotate max range torch_image_to_numpy_image zeros size transpose CompositeEval reshape pesq int min cos pi log10 linspace round append sum max range wss sorted llr min SSNR PESQ trim_mos mean round len cos pi floor linspace abs round log max fft list exp ones log10 ceil append sum range concatenate int reshape zeros array int T squeeze cos pi range dot lpcoeff floor linspace append round log list ones append zeros sum array range append default_collate int tqdm append array range load int range append max abs astype int32 min astype int32 float max reshape zeros range zeros FloatTensor enumerate len mean array recall_score mean precision_score f1_score array sqrt range log append range linear_interpolation all ones copy shape zeros range int join rstrip format pack print loadtxt interpolation len savetxt unpack splitext array split f0_guia f0_file process_guia vf_file bin_mode process_file vf_guia gen_uv int format reshape unpack array len run run exp arange size transpose cos is_cuda unsqueeze item zeros to range sin data fill_ print named_parameters normal_ __name__ data print named_parameters xavier_uniform_ __name__ spectral_norm items format view print cat enumerate StepLR Adam RMSprop print exp size abs read_aco_file int16 name interpolation min iinfo astype write NamedTemporaryFile wav2aco exists | # SEGAN # SE-GAN # Speech Enhancement 
GAN # Speech Enhancement Generative Adversarial Network
### Thesis - Paper (in Turkish)
https://tez.yok.gov.tr/UlusalTezMerkezi/TezGoster?key=fl0Kw4p1rmMDotyKRdYv1NC_jHlQf4_EkB366lPjbYSgMgBkdDEloOymzKUxe2_A
### Presentation (in Turkish)
https://www.youtube.com/watch?v=UMyHcdOsduU
### Pretrained Model
Old SEGAN generator weights are released and can be downloaded from [this link](http://veu.talp.cat/seganp/release_weights/segan+_generator.ckpt).
devendrachaplot/DeepRL-Grounding | ['imitation learning'] | ['Gated-Attention Architectures for Task-Oriented Language Grounding'] | env.py a3c_test.py utils/points.py models.py utils/doom.py env_test.py a3c_main.py constants.py a3c_train.py test ensure_shared_grads train GroundingEnv normalized_columns_initializer A3C_LSTM_GA weights_init set_doom_configuration get_world_coordinates pause_game get_doom_coordinates spawn_object get_agent_location split_object get_l2_distance DoomObject spawn_agent process_screen Grid generate_points generate data model save game_init seed GroundingEnv view step load_state_dict append state_dict format mean A3C_LSTM_GA eval softmax manual_seed info float long load join time print Variable dump_location reset zeros numpy array split parameters grad zip data ensure_shared_grads model zero_grad SGD game_init gather seed str list GroundingEnv view len tau load_state_dict append range state_dict num_steps log_softmax reversed A3C_LSTM_GA softmax manual_seed info clip_grad_norm float gamma long load join backward print Variable parameters pow reset zeros step array split size randn list fill_ size sqrt uniform_ prod __name__ set_render_decals set_render_messages RES_800X450 set_render_particles set_episode_timeout SPECTATOR visualize set_screen_format set_render_crosshair set_doom_scenario_path scenario_path MOVE_FORWARD RES_400X225 set_automap_buffer_enabled set_render_weapon TURN_RIGHT PLAYER set_mode set_render_minimal_hud TURN_LEFT set_depth_buffer_enabled set_episode_start_time set_window_visible set_labels_buffer_enabled set_render_hud interactive set_screen_resolution RGB24 add_available_button set_render_effects_sprites set_render_corpses get_game_variable USER4 get_world_coordinates USER3 send_game_command pause_game get_doom_coordinates range send_game_command get_doom_coordinates make_action range findall reverse transpose Grid range poisson generate | # Gated-Attention Architectures for Task-Oriented Language Grounding This is 
a PyTorch implementation of the AAAI-18 paper: [Gated-Attention Architectures for Task-Oriented Language Grounding](https://arxiv.org/abs/1706.07230)<br /> Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, Ruslan Salakhutdinov<br /> Carnegie Mellon University

Project Website: https://sites.google.com/view/gated-attention



### This repository contains:
- Code for training an A3C-LSTM agent using Gated-Attention
- Code for Doom-based language grounding environment
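The Gated-Attention fusion that gives this architecture its name multiplies the convolutional image representation elementwise with a sigmoid attention vector computed from the instruction embedding, broadcast over the spatial dimensions. The sketch below illustrates that gating in plain NumPy; the layer sizes and the random projection `W_a` are illustrative stand-ins, not the repository's actual configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

d, H, W = 64, 8, 17                         # hypothetical sizes, not the repo's
instr_emb = rng.standard_normal(256)        # instruction embedding (e.g. from a GRU)
W_a = 0.01 * rng.standard_normal((d, 256))  # stand-in for the learned attention projection
img_feat = rng.standard_normal((d, H, W))   # stand-in for the CNN image representation

a = sigmoid(W_a @ instr_emb)                # one gate per feature map, values in (0, 1)
gated = img_feat * a[:, None, None]         # gate broadcast over the spatial dimensions
```

Each of the `d` feature maps is thus scaled by how relevant it is to the given instruction before the policy network consumes it.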
dgattiwsu/Res-CR-Net | ['semantic segmentation'] | ['Res-CR-Net, a residual network with a novel architecture optimized for the semantic segmentation of microscopy images'] | MODULES/Utils.py MODULES/Blocks.py MODULES/__init__.py Res-CR-Net_train.py MODULES/Losses.py MODULES/Constants.py MODULES/Generators.py MODULES/Networks.py Res-CR-Net_predict.py residual_convLSTM2D_block stem_split_3k residual_block_split_3k shrink_block _Paths _Seeds _Params test_generator_1 to_val_indices to_train_indices val_generator_1 to_one_hot_val val_generator_2 test_generator_2 to_one_hot_train train_generator_2 train_generator_1 tani_loss weighted_bce_dice_loss weighted_dice_loss tani_coeff cce_tani_loss weighted_dice_coeff weighted_cce_tani_loss dice_loss bce_dice_loss_2 other_metrics dice_coeff weighted_bce_tani_loss weighted_tani_loss cce_dice_loss tani_coeff_nc weighted_bce_loss weighted_tani_coeff cce_loss weighted_cce_loss weights dice_coeff_corr ResUNet ResUNet_CR Dense_ResNet_Atrous ResUNet_CR_Big ResNet_Atrous ResUNet_Big Very_Dense_ResNet_Atrous get_class_threshold overlay_mask_2 overlay_mask get_model_memory_usage uint64 uint64 transpose reduce_sum ones_like floor unique ceil enumerate zeros_like floor unique enumerate zeros_like pool2d unique zeros round enumerate zeros zeros_like unique enumerate dict flow_from_directory _Params ImageDataGenerator _Paths _Seeds dict flow_from_directory _Params ImageDataGenerator _Paths _Seeds dict flow_from_directory _Params ImageDataGenerator _Paths _Seeds dict flow_from_directory _Params ImageDataGenerator _Paths range _Seeds dict flow_from_directory _Params ImageDataGenerator _Paths range _Seeds dict flow_from_directory _Params ImageDataGenerator _Paths range _Seeds reciprocal ones_like greater reduce_sum reduce_mean cast pool2d less tensordot ones_like reduce_sum dice_coeff categorical_crossentropy dice_loss reduce_sum reduce_sum tani_coeff categorical_crossentropy tani_loss weights reduce_sum weights reduce_sum weighted_dice_coeff 
weighted_tani_coeff exp weights maximum log clip weighted_bce_loss weighted_dice_coeff weighted_tani_coeff weighted_bce_loss clip log weights clip log weights reduce_sum clip log categorical_crossentropy reduce_mean dice_loss sum max reduce_sum numpy MirroredStrategy MirroredStrategy MirroredStrategy MirroredStrategy MirroredStrategy MirroredStrategy MirroredStrategy deepcopy squeeze astype dstack max deepcopy squeeze astype dstack max expand_dim append round range int sum layers round | # Res-CR-Net Res-CR-Net, a residual network with a novel architecture optimized for the semantic segmentation of microscopy images. Res-CR-Net is a neural network featuring a novel FCN architecture, with very good performance in multiclass segmentation tasks of both electron (gray scale, 1 channel) and light microscopy (rgb color, 3 channels) images of relevance in the analysis of pathology specimens. Res-CR-Net offers some advantages with respect to other networks inspired by an encoder-decoder architecture, as it is completely modular, with residual blocks that can be proliferated as needed, and it can process images of any size and shape without changing layer sizes and operations. Res-CR-Net can be particularly effective in segmentation tasks of biological images, where the labeling of ground truth classes is laborious, and thus the number of annotated/labeled images in the training set is small. Res-CR-Net combines two types of residual blocks:
CONV RES BLOCK. The traditional U-Net backbone architecture, with its encoder-decoder paradigm, is replaced by a series of modified residual blocks, each consisting of three parallel branches of separable + atrous convolutions with different dilation rates, that produce feature maps with the same spatial dimensions as the original image. The rationale for using multiple-scale layers is to extract object features at various receptive field scales.
Res-CR-Net offers the option of concatenating or adding the parallel branches inside the residual block before adding them to the shortcut connection. In our test, concatenation produced the best result. A Spatial Dropout layer follows each residual block. A slightly modified STEM block processes the initial input to the network. n CONV RES BLOCKS can be concatenated.
LSTM RES BLOCK. A new type of residual block features a residual path with two orthogonal bidirectional 2D convolutional Long Short Term Memory (LSTM) layers. For this purpose, the feature map 4D tensor emerging from the previous layer first undergoes a virtual dimension expansion to a 5D tensor (i.e. from [4,260,400,3] [batch size, rows, columns, number of classes] to [4,260,400,3,1]). In this case the 2D LSTM layer treats 260 consecutive tensor slices of dimensions [400,3,1] as the input data at each iteration. Each slice is convolved with a single filter of kernel size [3,3] with 'same' padding, and returns a slice of the exact same dimensions. In one-directional mode the LSTM layer returns a tensor of dimensions [4,260,400,3,1]. In bidirectional mode it returns a tensor of dimensions [4,260,400,3,2]. The intuition behind using a convolutional LSTM layer for this operation lies in the fact that adjacent image rows share most features, and image objects often contain some level of symmetry that can be properly memorized in the LSTM unit. Since the same intuition applies also to image columns, the expanded feature map of dimensions [4,260,400,3,1] from the earlier part of the network is transposed in the 2nd and 3rd dimensions to a tensor of dimensions [4,400,260,3,1]. In this case the LSTM layer processes 400 consecutive tensor slices of dimensions [260,3,1] as the input data at each iteration, returning a tensor of dimensions [4,400,260,3,2], which is transposed again to [4,260,400,3,2].
The two LSTM output tensors are then added and the final dimension is collapsed by summing its elements, leading to a final tensor of dimensions [4,260,400,3] which is added to the shortcut path. m LSTM RES BLOCKS can be concatenated. A LeakyReLU activation is used throughout Res-CR-Net. After the last residual block a softmax activation layer is used to project the feature map into the desired segmentation. Res-CR-Net is currently designed to work with either:
1) a single rgb mask/image of 3 or 4 binary channels, corresponding to 3-4 classes;
2) a single thresholded grayscale mask/image (i.e., a mask with 3 classes would have the regions corresponding to the three categories thresholded at values 0, 128, 255). In this case, gray scale masks are first converted to sparse categorical format, with each gray level corresponding to a different index (i.e., [0, 128, 255] goes to [0, 1, 2]); then, pixels identified by indices are converted to one-hot vectors.
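The dimension bookkeeping of the LSTM RES BLOCK described above (expand to 5D, run one LSTM over rows and a transposed one over columns, transpose back, add, collapse the last axis) can be sketched in plain NumPy. This is shapes only: the real bidirectional ConvLSTM2D layers are replaced by a stand-in that merely duplicates its input to produce the two output channels:

```python
import numpy as np

x = np.random.rand(4, 260, 400, 3)               # [batch, rows, cols, classes]

row_in = x[..., np.newaxis]                      # (4, 260, 400, 3, 1): rows are the LSTM steps
col_in = np.transpose(row_in, (0, 2, 1, 3, 4))   # (4, 400, 260, 3, 1): columns are the steps

def fake_bidir_lstm(t):
    # a bidirectional ConvLSTM2D returns two channels; duplicate the input as a stand-in
    return np.concatenate([t, t], axis=-1)

row_out = fake_bidir_lstm(row_in)                                 # (4, 260, 400, 3, 2)
col_out = np.transpose(fake_bidir_lstm(col_in), (0, 2, 1, 3, 4))  # back to (4, 260, 400, 3, 2)

merged = (row_out + col_out).sum(axis=-1)        # (4, 260, 400, 3), added to the shortcut
```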
dgchachlakis/The-exact-solution-to-rank-1-L1-norm-Tucker2-decomposition | ['combinatorial optimization'] | ['The Exact Solution to Rank-1 L1-norm TUCKER2 Decomposition'] | utils/ymatrix.py utils/__init__.py algorithms/__init__.py utils/mysign.py algorithms/exact.py utils/xmatrix.py utils/computeCandidates.py algorithms/exactpoly.py utils/decimal2binary.py examples.py exact exactpoly computeCandidates decimal2binary mysign xmatrix ymatrix svd list eye decimal2binary xmatrix kron range svd ymatrix range eye xmatrix kron computeCandidates combinations list svd tuple copy set flatten add decimal2binary union range astype int8 zeros range len zeros range flatten zeros range | ## The exact solution to rank-1 L1-norm Tucker2 decomposition ## In this repo we implent algorithms for the exact solution to rank-1 L1-norm Tucker2 decompostion of 3-ways tensors as they have been presented in [[1]](https://ieeexplore.ieee.org/document/8248754). Formally, given a collection of matrix measurements $\mathbf X_1, \mathbf X_2,\ldots, \mathbf X_N \in \mathbb R^{D \times M}$, the scripts provided solve exactly the followig problem:  --- * IEEEXplore article: https://ieeexplore.ieee.org/document/8248754 * arXiv Preprint: https://arxiv.org/abs/1710.11306 * Source code: https://github.com/dgchachlakis/The-exact-solution-to-rank-1-L1-norm-Tucker2-decomposition --- **Questions/issues** | 1,899 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.