repo | tasks | titles | dependencies | readme | __index_level_0__
---|---|---|---|---|---
baiyangwang/emgan | ['word embeddings'] | ['Generative Adversarial Nets for Multiple Text Corpora'] | reuters/word2vec.py cnn/LDA.py 20news/plain_cluster.py cnn/plain_cluster.py reuters/gan_full.py reuters/basic.py reuters/gan_cluster.py time/gan_cluster.py 20news/gan_full.py time/basic.py cnn/gan_full.py 20news/LDA.py 20news/word2vec.py reuters/LDA.py 20news/basic.py time/word2vec.py time/output.py time/LDA.py time/plain_cluster.py cnn/basic.py cnn/word2vec.py cnn/output.py reuters/plain_cluster.py 20news/gan_cluster.py cnn/gan_cluster.py time/gan_full.py HiddenLayer row_normalize MLP rand LogisticRegression GAN Generator GAN GAN HiddenLayer row_normalize MLP rand LogisticRegression GAN Generator GAN GAN key_for_max_val HiddenLayer row_normalize MLP rand LogisticRegression GAN Generator GAN GAN HiddenLayer row_normalize MLP rand LogisticRegression GAN Generator GAN GAN key_for_max_val range | # emgan codes for "Generative Adversarial Nets for Multiple Text Corpora" First unzip the files in each data set under data/cleaned or data/; then run word2vec.py; then run gan_cluster.py (weGAN) or gan_full.py (deGAN) | 1,500 |
bakarov/2ch2vec | ['word embeddings'] | ['Automated Detection of Non-Relevant Posts on the Russian Imageboard "2ch": Importance of the Choice of Word Representations'] | aist2017/supplementary_scripts/anno2ch/re_annotator.py aist2017/supplementary_scripts/word2vec_train_model.py aist2017/supplementary_scripts/anno2ch/anno2ch.py aist2017/supplementary_scripts/data_miner/data_miner.py aist2017/supplementary_scripts/tfidf_train_model.py aist2017/paper/utils/string_utils.py aist2017/paper/utils/stopwords_maker.py aist2017/supplementary_scripts/make_corpus.py aist2017/paper/embed_utils.py aist2017/supplementary_scripts/glove_train_model.py cosine_sim ugly_normalize wv Word2VecF Swivel get_adagram_sense_prob morph_parse punctuate_word punctuate_sent remove_html make_tokens remove_leading cut make_alpha train_model serialize_model load_data save_corpus_to_text merge_corpora picklize_corpus train_model serialize_model load_data train_model serialize_model load_data punctuate_word punctuate_sent get_annotate_data remove_html make_df get_thread_names remove_leading load_page show_thread_names start_annotating cut make_alpha punctuate_word punctuate_sent serialize_comments load_threads load_comments cut sqrt sum word_sense_probs compile compile compile compile compile compile add_dictionary fit dictionary matrix Corpus Glove join makedirs format print save_corpus_to_text apply dropna DataFrame len TfidfVectorizer fit_transform format print len save json print enumerate lower json int input print eval append to_csv DataFrame reset_index from_csv json print json join print from_csv to_csv DataFrame drop_duplicates makedirs | # Text analysis on 2ch! Data: https://yadi.sk/d/Md3_CZ8I3NbqyV ``` @inproceedings{bakarov2017automated, title={Automated Detection of Non-Relevant Posts on the Russian Imageboard “2ch”: Importance of the Choice of Word Representations}, author={Bakarov, Amir and Gureenkova, Olga}, booktitle={International Conference on Analysis of Images, Social Networks and Texts}, pages={16--21}, year={2017}, organization={Springer} | 1,501 |
bakarov/cross-lang-embeddings | ['word embeddings'] | ['The Limitations of Cross-language Word Embeddings Evaluation'] | current/translate.py set_translator make_translated_dataset translate Lemmatizer set_key Translater set_from_lang set_to_lang set_text replace zip | # The Limitations of Cross-language Word Embeddings Evaluation The aim of this work is to explore the possible limitations of existing methods of cross-language word embeddings evaluation, addressing the lack of correlation between intrinsic and extrinsic cross-language evaluation methods. To prove this hypothesis, we construct English-Russian datasets for extrinsic and intrinsic evaluation tasks and compare performances of 5 different cross-language models on them. The results say that the scores even on different intrinsic benchmarks do not correlate to each other. We can conclude that the use of human references as ground truth for cross-language word embeddings is not proper unless one does not understand how do native speakers process semantics in their cognition. ## Papers related to the project: 1. Bakarov, A., Suvorov. R., Sochenkov, I. (2018, June). [The Limitations of Cross-language Word Embeddings Evaluation](http://aclweb.org/anthology/S18-2010). In Proceedings of the 7th Joint Conference on Lexical and Computational Semantics (* SEM 2018). ``` @inproceedings{bakarov2018limitations, title={The Limitations of Cross-language Word Embeddings Evaluation}, author={Bakarov, Amir and Suvorov, Roman and Sochenkov, Ilya}, booktitle={Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics}, pages={94--100}, | 1,502 |
bakirillov/capsules | ['gaussian processes', 'data augmentation', 'anomaly detection'] | ['Kernelized Capsule Networks', 'HitNet: a neural network with capsules embedded in a Hit-or-Miss layer, extended with hybrid data augmentation and ghost capsules', 'Image anomaly detection with capsule networks and imbalanced datasets', 'Exploring Deep Anomaly Detection Methods Based on Capsule Net'] | capsules.py PrimaryCapsuleLayer CentripetalLoss SecondaryCapsuleLayer anomaly_scores RegularizingDecoder mask_hom squash CapsuleLoss HitOrMissLayer make_y normality_scores sum index_select eye stack | # capsules Implementation of capsule network with dynamic routing that actually works. New features: 1. modification of the loss for anomaly detection based on https://arxiv.org/pdf/1909.02755.pdf 2. anomaly score based on https://arxiv.org/pdf/1909.02755.pdf 3. normality scores based on https://arxiv.org/pdf/1907.06312.pdf 4. HitOrMiss capsules and Centripetal Loss from https://arxiv.org/pdf/1806.06519.pdf 5. Kernelized Capsule Networks (https://arxiv.org/abs/1906.03164) example with GPytorch - partially done, reconstruction doesn't work yet TODO: 1. add CIFAR10 example | 1,503 |
balasrini33/JanossyPooling | ['stochastic optimization'] | ['Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs'] | graphsage/train_janossy_gs.py graphsage/preprocess_ppi.py graphsage/janossy_gs/__init__.py graphsage/predict_janossy_gs.py arithmetic_tasks/train.py graphsage/janossy_gs/data_loaders.py graphsage/janossy_gs/graph_models.py arithmetic_tasks/__init__.py graphsage/training_utils.py graphsage/__init__.py arithmetic_tasks/models.py TextModels image_dataset_construction text_dataset_construction determine_vocab_size construct_task_specific_output unison_shuffled train_text valid_argument_check permute func main parse_args janossy_text_input_construction make_inference_paths preprocess_ppi set_logger RunningAverage get_n_params Params save_dict_to_json parse_args build_out_path load_citation load_ppi load_cora load_pubmed load_reddit MeanAggregator Encoder JanossyGraphSage LSTMAggregator SupervisedGraphSage add_argument ArgumentParser print int exit combinations list sort append range len int list randint construct_task_specific_output tqdm zeros range janossy_text_input_construction unique permutation len zero_grad mean_absolute_error TextModels str Adam apply_along_axis unison_shuffled append to range zeros int time backward text_dataset_construction print mean_squared_error step loss len int str neurons learning_rate batch_size determine_vocab_size iterations lower train_text hidden_layers valid_argument_check parse_args range str replace load write_gpickle format node_link_graph isinstance print fit remove_node nodes dict save edges transform StandardScaler array open setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO FileHandler sum lower typical_epochs pop items sorted list str OrderedDict zip keys values zeros empty defaultdict zeros empty defaultdict load get defaultdict nodes set dict read_gpickle len | # Janossy Pooling ### Authors: Ryan L. Murphy and Balasubramaniam Srinivasan ## Overview: This is the code for [Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs](https://arxiv.org/abs/1811.01900). We evaluate different Janossy Pooling models for tasks similar to those found in [Deep Sets](https://github.com/manzilzaheer/DeepSets) and [GraphSAGE](https://github.com/williamleif/GraphSAGE). Our implementation follows these, as well as the reference [PyTorch implementation of GraphSAGE](https://github.com/williamleif/graphsage-simple/). The latter repo also contains two datasets that we use. The first set of tasks is to perform arithmetic on sequences of integers: sum, range, unique sum, unique count, and variance. Note that these functions are all permutation-invariant (symmetric). The second set of tasks learns vertex embeddings in graphs for vertex classification. The data are described below. Please see the supplementary section for a brief description and summary of the code. ## Requirements | 1,504 |
balbok0/bayes-nn-qsh | ['gaussian processes'] | ['Bayesian Deep Learning on a Quantum Computer'] | generate_dataset.py plot_result.py | # Quantum Cluster Kata This is the alpha verison of Unsupervised ML Kata for Microsoft Quantum Projects. Uses algorithms described in: - https://link.springer.com/article/10.1007/s10994-012-5316-5 - https://arxiv.org/pdf/1401.2142.pdf | 1,505 |
balikasg/WassersteinRetrieval | ['information retrieval'] | ['Cross-lingual Document Retrieval using Regularized Wasserstein Distance'] | wass_funcs.py extract_embeddings_conceptNet.py emd.py mrr_precision_at_k WassersteinDistances load_embeddings clean_corpus_using_embeddings_vocabulary zeros enumerate len join word_tokenize astype difference lower append float enumerate join splitlines split | # Wasserstein For Documents Implementation for our ECIR 2018 paper "Fast Cross-lingual Document Retrieval using Regularized Wasserstein Distance". The code in the repository implements the `Wass` and `Entro_Wass` models of our paper. The implementations heavily rely on: - [Python Optimal Transport (POT)](https://github.com/rflamary/POT) for calculating the wasserstein distances - [Scikit-learn](http://scikit-learn.org/stable/) for extending the nearest neighbors classifier to provide a generic framework for using out method. # Running the code To run the code, one first needs to download the [Numberbatch](https://github.com/commonsense/conceptnet-numberbatch) embeddings we used in this paper. We provide a script to download the embeddings and extract those for a subset of languages, e.g., English and French. To do that, first clone the repository, move the the directory and execute the script: ``` git clone https://github.com/balikasg/WassersteinRetrieval | 1,506 |
bamps53/SeesawLoss | ['instance segmentation', 'semantic segmentation'] | ['Seesaw Loss for Long-Tailed Instance Segmentation'] | seesaw_loss.py normalized_linear.py NormalizedLinear SeesawLossWithLogits DistibutionAgnosticSeesawLossWithLogits | # Seesaw Loss This is unofficial pytorch implementation for SeesawLoss, which was proposed by Jiaqi Wang et. al. in [their technical report for LVIS workshop at ECCV 2020](https://arxiv.org/pdf/2008.10032.pdf). | 1,507 |
bangxiangyong/bae-ood-images | ['out of distribution detection'] | ['Bayesian Autoencoders: Analysing and Fixing the Bernoulli likelihood for Out-of-Distribution Detection'] | train_models_main.py image_publication_pcc_ll.py test_models_images.py _train_models_images.py image_publication_ism_hist.py set_legend_marksersize log_gaussian_loss log_bernoulli_loss ax_kde log_bernoulli_loss_np mse calc_image_similarity calc_proportion_zeros log_gaussian_loss log_bernoulli_loss set_legend_marksersize str_deci plot_maximum_LL log_bernoulli_loss_np train_model_images nan_to_num nan_to_num flatten_np flatten_np legendHandles range len flatten convert_image_int mean append range method plot plot xlabel tight_layout ylabel figure reshape DataLoader BAE_Ensemble SVHN str FashionMNIST run_test_model infer_decoder VAE pprint run_auto_lr_range sum BAE_MCDropout replace Compose BAE_VI Encoder Autoencoder CIFAR10 decoder_sigma_enabled MNIST int losses print DenseLayers save_bae_model isnan fit | # Bayesian Autoencoders for Out-of-Distribution (OOD) Detection Code complementary to our paper "Bayesian Autoencoders: Analysing and Fixing the Bernoulli likelihood for Out-of-Distribution Detection" ## Models and likelihood - Bayes by Backprop, Anchored ensembling, MC-Dropout - Bernoulli, Continous Bernoulli, Gaussian ## Dataset pairs (in-distribution vs OOD) - FashionMNIST vs MNIST - MNIST vs FashionMNIST - SVHN vs CIFAR10 - CIFAR10 vs SVHN | 1,508 |
baodaiqin/UGDSRE | ['relation extraction'] | ['Two Training Strategies for Improving Relation Extraction over Universal Graph'] | ugdsre_biomedical/train_ug_proposed.py ugdsre_biomedical/test_ug_proposed.py ugdsre_biomedical/train_ug_pretrain.py initialize_nyt10_part2.py ugdsre_nyt10/network_pretrain_rank.py ugdsre_nyt10/network_pretrain.py ugdsre_nyt10/metrics.py initialize_biomedical_part1.py ugdsre_biomedical/network_ug.py ugdsre_nyt10/kg_dataset_transe.py ugdsre_nyt10/test_ug_proposed.py ugdsre_nyt10/train_ug_proposed.py ugdsre_biomedical/network_pretrain.py ugdsre_biomedical/network_pretrain_rank.py ugdsre_biomedical/kg_dataset_transe.py ugdsre_nyt10/test_baseline.py initialize_biomedical_part2.py initialize_nyt10_part1.py ugdsre_nyt10/test_ug.py ugdsre_biomedical/test_baseline.py ugdsre_nyt10/network_ug.py ugdsre_biomedical/train_ug_ranking_pretrain.py ugdsre_biomedical/test_ug.py ugdsre_biomedical/metrics.py ugdsre_nyt10/train_ug_pretrain.py ugdsre_nyt10/train_ug_ranking_pretrain.py metrics metrics | # UGDSRE Codes and datasets for our paper "Two Training Strategies for Improving Relation Extraction over Universal Graph" ([PDF](https://arxiv.org/pdf/2102.06540.pdf)) ## Overview of our framework <img src="overview_of_ugdsre.png" width="700"> ## Dependencies - python = 2.x - tensorflow = 1.9.0 - numpy - sklearn ## Data preprocessing | 1,509 |
baojianzhou/graph-sto-iht | ['stochastic optimization'] | ['Stochastic Iterative Hard Thresholding for Graph-structured Sparsity Optimization'] | exp_sr_test02.py exp_sr_test07.py exp_sr_test06.py exp_bc_dp.py exp_sr_test01.py exp_bc_show_genes.py setup.py exp_sr_test04.py algo_wrapper/__init__.py exp_sr_test03.py algo_wrapper/base.py algo_wrapper/c/__init__.py exp_bc_run.py exp_sr_test05.py summary_data test_03 get_single_data generate_original_data get_query raw_data_process main map_entrez_gene_name run_test get_related_genes algo_sto_iht_backtracking run_test expit get_single_data summarize_data logit_loss_bl run_parallel_tr run_parallel_te algo_graph_sto_iht_backtracking logistic_predict show_test main log_logistic show_genes algo_head_tail_bisearch logit_loss_grad_bl run_single_test show_genes_nonconvex show_detected_genes get_single_data show_genes_convex main simu_grid_graph random_walk algo_graph_iht algo_iht algo_sto_iht show_test generate_figures main algo_graph_sto_iht print_helper sensing_matrix algo_head_tail_bisearch run_test run_single_test simu_grid_graph show_test_2 random_walk run_test_diff_eta run_test_diff_b algo_sto_iht run_single_test_diff_b run_single_test_diff_eta show_test main algo_graph_sto_iht print_helper sensing_matrix algo_head_tail_bisearch simu_grid_graph random_walk algo_graph_iht algo_iht algo_sto_iht summarize_results show_test main algo_graph_sto_iht print_helper sensing_matrix algo_head_tail_bisearch run_test run_single_test simu_grid_graph cv_iht cv_graph_iht algo_graph_iht algo_iht cv_sto_iht algo_sto_iht cv_graph_sto_iht summarize_results show_test main algo_graph_sto_iht print_helper algo_head_tail_bisearch run_test run_single_test cv_sto_iht algo_graph_sto_iht algo_head_tail_bisearch cv_iht algo_graph_iht algo_iht cv_graph_sto_iht summarize_results print_helper main sensing_matrix run_test run_single_test simu_grid_graph algo_niht cv_graph_iht algo_graph_cosamp algo_sto_iht show_test algo_cosamp cv_sto_iht show_resized_figures algo_graph_sto_iht get_img_data algo_head_tail_bisearch cv_iht algo_graph_iht algo_iht cv_graph_sto_iht summarize_results print_helper main run_test run_single_test simu_grid_graph algo_niht cv_graph_iht algo_graph_cosamp algo_sto_iht show_test algo_cosamp simu_grid_graph random_walk show_run_time_algo run_test_diff_b test_on_fix_s algo_sto_iht run_single_test_diff_b test_on_fix_n show_run_time_proj algo_graph_sto_iht main sensing_matrix algo_head_tail_bisearch test_logistic m_print logit_loss_bl _grad_w least_square_predict logistic_predict log_logistic logit_loss_grad_bl test_expit expit node_pre_rec_fm test_random_walk logit_loss_grad random_walk main sensing_matrix simu_grid_graph gen_test_case logistic auc_node_fm dict number_connected_components dump asarray number_of_nodes get_related_genes print Graph extend dict number_of_edges open loadmat range len dump print zeros range open load dump asarray print extend dict enumerate open load print len mean loadmat open load list dump querymany print write close exit MyGeneInfo enumerate open dict print connected_component_subgraphs add_edge list asarray number_connected_components Graph print len index set dict add append range enumerate dump get_single_data print index set dict open append union range len summary_data exp zeros_like where dot shape expit ones exp log zeros_like where expit asarray T zeros_like where dot log_logistic sum dot log_logistic sum where wrap_head_tail_bisearch zeros_like len seed int list norm zeros_like logit_loss_bl copy shape enumerate append 
randint float sum algo_head_tail_bisearch range logit_loss_grad_bl len seed int list norm logit_loss_bl shape enumerate randint float sum range logit_loss_grad_bl len algo_sto_iht_backtracking print dict algo_graph_sto_iht_backtracking logistic_predict accuracy_score zeros float sum roc_auc_score len join product print min close map dict append Pool range len join close map dict Pool dump get_single_data run_parallel_tr divide copy run_parallel_te dict nan_to_num tile std range load dict open append len summarize_data argmax open argmin append union range set mean nonzero float enumerate load join print extend split std len dump get_single_data print index dict append range open int list show_test mkdir range run_test dump get_single_data print index dict range open load list dump print dict keys range open subplots axis tick_params set_weight open list FontProperties nodes savefig setp intersection append range add_edge Graph get_xticklabels close set draw_spring keys add_node load enumerate minimum_spanning_tree get_yticklabels print rc subplots_adjust dict show_detected_genes seed asarray print ones uniform append range len normal norm reshape dot sqrt len seed list print set choice add len time norm transpose dot shape range seed int time list norm transpose dot shape randint range int time norm transpose copy dot algo_head_tail_bisearch range len seed int time list norm transpose copy dot shape randint algo_head_tail_bisearch range len print seed algo_graph_iht algo_iht algo_sto_iht algo_graph_sto_iht append print_helper sensing_matrix fmin round Pool seed list map append normal product close int time simu_grid_graph join index zeros subplots arange grid set_visible max list set_title set_xlabel title savefig legend product plot replace set_position close rc set_yticks subplots_adjust set_xticks set_ylabel zeros axis list draw_networkx_nodes set_major_locator NullLocator set_figheight savefig append draw_networkx_edges range gcf add_edge normal product margins close enumerate add_node print axes rc subplots_adjust figure set_figwidth join asarray arange print exit generate_figures append append seed algo_sto_iht algo_graph_sto_iht append print_helper sensing_matrix seed algo_sto_iht algo_graph_sto_iht append print_helper sensing_matrix Pool seed list map log10 append range normal product close mean join time simu_grid_graph int print index dict zeros len Pool seed list map log10 append range normal product close mean join time simu_grid_graph int print index zeros len subplots set_yticklabels grid set_visible open set_title set_xlabel savefig legend append range get_position asarray replace plot set_position set_xlim close enumerate load join print rc set_yticks extend subplots_adjust set_ylabel set_xticks jet get_legend_handles_labels set_ylim len set_yticklabels get_position asarray set_xlim jet get_legend_handles_labels set_ylim dump run_test_diff_b run_test_diff_eta open zip sqrt norm reshape dot int dump print index mean open append round len summarize_results norm algo_iht argmin dict dot zeros enumerate norm product algo_sto_iht dict dot zeros enumerate len norm algo_graph_iht argmin dict dot zeros enumerate len norm product dict dot algo_graph_sto_iht zeros enumerate len zeros_like cv_iht cv_graph_iht cv_sto_iht cv_graph_sto_iht update zip load extend dict zeros axhline ones axvline imshow setp normal nan get_yticklabels time norm __eq__ transpose set dot shape range time norm zeros_like transpose union1d dot pinv algo_head_tail_bisearch range time norm zeros_like transpose 
union1d dot shape pinv range normal algo_niht reshape algo_graph_cosamp dot sqrt algo_cosamp text reshape BILINEAR dict resize append enumerate len seed subplots set_title print reshape rc set_yticks BILINEAR close subplots_adjust imshow set_xticks savefig resize enumerate exit get_img_data GridSpec xticks yticks subplot get_img_data reshape figure get_img_data seed show_resized_figures dump zip open append int run_test_diff_b mkdir append int run_test_diff_b mkdir subplots grid open list set_title set_xlabel savefig legend append range asarray plot mean zip load int print rc subplots_adjust set_ylabel set_xticks len subplots grid open list set_title set_xlabel savefig legend append range asarray plot mean zip load int print rc subplots_adjust set_ylabel set_xticks len test_on_fix_s append dot range len expit T dot zeros sum expit T dot zeros sum float len print exit count_nonzero str format print len str write close open range len print expit asarray print asarray print simu_grid_graph random_walk test_logistic simu_grid_graph test_random_walk test_expit | ## Stochastic IHT for Graph-structured Sparsity Optimization ## Overview Welcome to the repository of GraphStoIHT! This repository is only for reproducing all experimental results shown in our paper. To install it via pip, please try [sparse-learn](https://github.com/baojianzhou/sparse-learn). Our work is due to several seminal works including [StoIHT](https://ieeexplore.ieee.org/abstract/document/8025727), [AS-IHT](http://papers.nips.cc/paper/6483-fast-recovery-from-a-union-of-subspaces), and [GraphCoSaMP](http://people.csail.mit.edu/ludwigs/papers/icml15_graphsparsity.pdf). More details of GraphStoIHT can be found in: "Baojian Zhou, Feng Chen, and Yiming Ying, | 1,510 |
baojianzhou/sparse-auc | ['imbalanced classification'] | ['Stochastic Hard Thresholding Algorithms for AUC Maximization'] | baselines/liblinear-2.30/python/liblinear.py baselines/liblinear/liblinear.py test_simu.py baselines/liblinear/commonutil.py baselines/icml18_fsauc/auc_python/get_idx.py baselines/icml18_fsauc/auc_python/auc_fs.py baselines/liblinear/liblinearutil.py baselines/icml18_fsauc/auc_python/exp_.py baselines/liblinear-2.30/python/commonutil.py test_high_dim_leukemia.py algo_wrapper/algo_wrapper.py test_high_dim_colon.py baselines/liblinear-2.30/python/liblinearutil.py cv_spam_l2 cv_sht_auc cv_spam_l1l2 show_auc_scores cv_spam_l1 cv_sto_iht summary_feature_results show_auc process_data_20_colon cv_solam cv_fsauc show_figure4_a main summary_auc_results cv_hsg_ht cv_spam_l2 cv_sht_am cv_spam_l1l2 run_methods cv_spam_l1 cv_sto_iht summary_feature_results show_figure4_b cv_solam cv_fsauc main summary_auc_results cv_hsg_ht process_data_21_leukemia show_figure7 cv_sto_iht run_diff_b cv_solam conv run_ms show_figure1 run_para_s run_testing test_single_3 cv_sht_am cv_spam_l1l2 test_spam_l1l2 node_pre_rec_fm show_figure3_a run_diff_s _gen_dataset_00_simu show_figure2 test_spam_l2 run_diff_ratio cv_fsauc test_fsauc test_spam_l1 cv_spam_l2 test_hsg_ht run_all_model_selection test_single_2 test_sht_am show_table1 show_result_01_2 show_figure3_b run_conv test_solam cv_spam_l1 show_figure6 test_sto_iht test_single cv_hsg_ht algo_test fpr_tpr_auc algo_sparse_solam_cv algo_sparse_solam algo_solam_py algo_solam algo_solam_cv algo_da_solam_cv algo_da_solam ProjectOntoL1Ball auc_fs get_idx csr_scale svm_read_problem csr_find_scale_param evaluations evaluations_scipy fillprototype parameter model print_null csr_to_problem_nojit problem gen_feature_nodearray csr_to_problem csr_to_problem_jit genFields toPyModel feature_node train save_model load_model predict csr_scale svm_read_problem csr_find_scale_param evaluations evaluations_scipy fillprototype parameter model print_null csr_to_problem_nojit problem gen_feature_nodearray csr_to_problem csr_to_problem_jit genFields toPyModel feature_node train save_model load_model predict list permutation dump print len split open range enumerate KFold asarray print dict c_algo_sht_auc empty range roc_auc_score asarray c_algo_sto_iht print dict empty range roc_auc_score c_algo_hsg_ht asarray print dict empty roc_auc_score asarray arange product c_algo_spam print dict empty roc_auc_score asarray arange product c_algo_spam print dict empty roc_auc_score asarray arange product c_algo_spam print dict empty roc_auc_score asarray c_algo_solam product arange print dict empty roc_auc_score c_algo_fsauc asarray arange product print dict empty roc_auc_score load list dump print dict open keys enumerate load list dump print len set dict intersection float keys enumerate open subplots set_yticklabels grid set_visible max values open list set_xlabel savefig legend append plot close mean enumerate load print rc set_yticks set_ylabel std set_ylim len load list subplots plot rc set_xlabel close mean enumerate set_visible set_ylabel savefig legend range values open subplots grid set_visible values open list set_xlabel savefig legend append plot set_xticklabels close mean enumerate load rc set_ylabel set_xticks load join list dump product close map show_auc show_figure4_a append Pool range open list permutation dump print len split open range enumerate KFold asarray print dict c_algo_sht_auc empty range roc_auc_score load join list dump product close map append Pool range 
open subplots set_yticklabels grid set_visible max values open list set_xlabel savefig legend append plot close mean enumerate load print rc set_yticks set_ylabel std set_ylim len subplots grid set_visible values open list set_xlabel savefig legend append plot set_xticklabels close mean enumerate load rc set_ylabel set_xticks run_methods show_figure7 show_figure4_b permutation open len range KFold dump asarray mean tile enumerate norm print reshape divide dict nan_to_num zeros std split float len count_nonzero flush time list range mean split zeros float enumerate KFold load asarray c_algo_solam print len set dict intersection float empty range flush open count_nonzero flush time list range mean split zeros float enumerate KFold load asarray c_algo_spam print len set dict intersection float empty range flush open count_nonzero flush time list range mean split zeros float enumerate KFold load asarray c_algo_spam print len set dict intersection float empty range flush open count_nonzero flush time list range mean split zeros float enumerate KFold load asarray c_algo_spam print len set dict intersection float empty range flush open count_nonzero flush time list range mean split zeros float enumerate KFold load c_algo_fsauc asarray print len set dict intersection float empty range flush open count_nonzero flush time list product mean split zeros float enumerate KFold load flush asarray print set dict intersection empty range c_algo_sht_auc open count_nonzero flush time list product mean split zeros float enumerate KFold load asarray c_algo_sto_iht print set dict intersection empty range flush open count_nonzero flush list time product mean enumerate split zeros float range KFold load c_algo_hsg_ht asarray print set dict intersection empty range flush open flush dump asarray print dict mean append empty max range c_algo_sht_auc open subplots set_yticklabels grid set_visible open list set_xlabel savefig legend range plot set_xticklabels close enumerate load join int rc set_ylabel set_xticks len join product close map append Pool join dump product close map dict append Pool range open join list dump product close map append Pool range open count_nonzero list asarray arange product print mean enumerate roc_auc_score split zeros float empty range c_algo_sht_auc KFold load dump asarray dict mean zeros empty range c_algo_sht_auc open load c_algo_hsg_ht asarray list product c_algo_sto_iht print dict roc_auc_score zeros empty range c_algo_sht_auc open load show subplots plot set_xticklabels rc grid set_xlabel close set_visible set_ylabel set_xticks savefig legend enumerate open load list asarray product c_algo_sto_iht print dict roc_auc_score zeros empty range c_algo_sht_auc open load show list subplots plot set_xticklabels rc grid len set_xlabel close set_visible set_ylabel savefig legend range enumerate open join print lstrip append enumerate subplots grid set_visible open show list set_title set_xlabel savefig legend append range product plot close mean enumerate load join rc set_yticks subplots_adjust set_ylabel set_xticks set_ylim subplots grid set_visible open set_title set_xlabel savefig legend append range plot close mean enumerate load join rc subplots_adjust set_ylabel set_xticks subplots grid set_visible open list set_title set_xlabel savefig legend append range product plot close mean enumerate load join print rc subplots_adjust set_ylabel set_xticks show asarray c_algo_sto_iht plot print empty range roc_auc_score show c_algo_hsg_ht asarray plot print empty range roc_auc_score asarray print 
c_algo_sht_auc append empty range roc_auc_score list asarray c_algo_solam product arange print range split zeros empty enumerate roc_auc_score KFold c_test reshape asarray print int time asarray c_algo_solam float asanyarray int list asarray c_algo_solam arange len dict enumerate mean split zeros float range roc_auc_score KFold int time asarray c_algo_sparse_solam float asanyarray list asarray arange print len dict dot enumerate split c_algo_sparse_solam zeros range roc_auc_score KFold asanyarray c_algo_da_solam asarray list c_algo_da_solam asarray arange print len dict enumerate mean split zeros float range roc_auc_score KFold norm zeros_like print dot sqrt zeros float range len dot roc_curve roc_auc_score auc int norm zeros_like ProjectOntoL1Ball inner min log shape log2 sqrt zeros abs range len cumsum abs maximum sign zeros permutation arange int frombuffer csr_matrix open append float array enumerate split mean sum len len zip flatten shape print print reshape csr_matrix getnnz dot shape vstack resize diags genFields list sorted isinstance keys range enumerate len range slice range data csr_to_problem_nojit indptr copy indices rowptr csr_to_problem_jit empty nnz genFields genFields genFields contents print toPyModel _cstr _cstr c_double flag_p_specified toPyModel find_parameters p print_func C flag_cross_validation set_print_string_function check_parameter set_bias parameter nr_fold isinstance flag_find_parameters print bias problem flag_C_specified cross_validation evaluations predict_probability tocsr is_probability_model get_nr_feature range feature_node is_regression_model slice solver_type ascontiguousarray get_nr_class predict_values gen_feature_nodearray info int bias split evaluations len | # sparse-auc This repository implements [Stochastic Hard Thresholding Algorithms for AUC Maximization](https://www.computer.org/csdl/proceedings-article/icdm/2020/831600a741/1r54yrYvEmA) by Yang et al., 2020. In a Python 2.7 enrivoment Change the directory in your favor in `build.sh` `test_simu.py` `test_high_dim_colon.py` `test_high_dim_leukemia.py` Run `build.sh` to build Datasets are available at https://drive.google.com/drive/folders/1qdeFB7OEZgs4FqBjASKH1FPiyCoz9mcw?usp=sharing | 1,511 |
baumanndominik/identifying_causal_structure | ['causal inference'] | ['Identifying Causal Structure in Dynamical Systems'] | caus_id_GP.py util.py examples.py caus_id_linear.py predict_mmd real_mmd sys_id_data check_struct GPmodel create_multi_tank_system go_to_init caus_exp get_test_statistic caus_id simulate_system create_sys_id_data sys_id sys_id_loc synthetic_example tank_system mult_tank_system set_point_ctrl dlqr rk4 rbf reshape_pt1 mmd2 zeros uniform enumerate dynamics deepcopy hstack indep_var delete flatten mmd2 zeros std range enumerate len deepcopy reshape dynamics hstack vstack mmd2 zeros range enumerate len normal list deepcopy predict_mmd min_val T zeros real_mmd state mean uniform linspace append max_val range len array mult_tank_system chirp flatten linspace inp_dim zeros range dot pinv sqrt vstack zeros sum range len insert delete dot pinv vstack normal flatten dot repmat eye zeros kron range len simulate_system mmd2 zeros std range len print state dot flatten zeros go_to_init deepcopy print flatten zeros range len high_obs inp_dim set_point_ctrl synthetic_example uniform mmd2 append create_sys_id_data sys_id range sys_id_loc get_test_statistic deepcopy caus_exp print reshape zeros diag len T multiply reshape repmat len fill_diagonal cdist reshape min rbf median float matrix inv solve_discrete_are T dot block pinv dlqr int reshape_pt1 fx ceil range T print reshape shape isscalar array | baumanndominik/identifying_causal_structure | 1,512 |
baumgach/PHiSeg-code | ['medical image segmentation', 'semantic segmentation'] | ['PHiSeg: Capturing Uncertainty in Medical Image Segmentation'] | phiseg_makegif_samples.py eval_ged_plot.py data/batch_provider.py phiseg_sample_construction.py tfwrapper/losses.py phiseg/experiments/phiseg_7_5_1annot.py tfwrapper/activations.py phiseg_train.py phiseg/model_zoo/likelihoods.py utils.py phiseg/experiments/probunet_1annot.py eval_ncc_plot.py data/data_switch.py phiseg/phiseg_model.py phiseg/model_zoo/posteriors.py tfwrapper/utils.py phiseg_generate_samples.py phiseg/experiments/phiseg_7_5.py phiseg/experiments/phiseg_7_1_1annot.py tfwrapper/layers.py data/lidc_data.py phiseg/experiments/phiseg_7_1.py phiseg/experiments/probunet.py phiseg/model_zoo/priors.py phiseg/experiments/detunet.py config/system.py tfwrapper/normalisation.py data/lidc_data_loader.py eval_dice_plot.py phiseg_test_predictions.py phiseg_test_quantitative.py main preproc_image generate_error_maps main softmax histogram_equalization findsubsets main main main main all_argmax Bunch find_floor_in_list normalise_image histogram_equalization convert_to_uint8 jaccard_onehot convert_to_uint8_rgb_fixed makefolder norm_l2 save_nii convert_to_onehot ncc load_nii map_image_to_intensity_range normalise_images generalised_energy_distance softmax convert_batch_to_onehot create_and_save_nii variance_ncc_dist map_images_to_intensity_range list_mean setup_GPU_environment resize_batch BatchProvider data_switch lidc_data crop_or_pad_slice_to_size find_subset_for_id prepare_data load_and_maybe_process_data phiseg prob_unet2D det_unet2D phiseg prob_unet2D dummy phiseg prob_unet2D dummy phiseg leaky_relu conv3D residual_unit2D maxpool2D crop_and_concat _add_summaries transposed_conv2D identity_residual_unit2D maxpool3D global_averagepool3D bilinear_upsample3D reshape_pool2D_layer nearest_neighbour_upsample2D conv2D bilinear_upsample2D dropout dense_layer dilated_conv2D averagepool2D transposed_conv3D global_averagepool2D pad_to_size cross_entropy_loss get_dice pixel_wise_cross_entropy_loss_weighted dice_loss batch_renorm batch_norm instance_norm2D identity group_norm2D layer_norm get_checkpoint_weights _bilinear_upsample_weights prepare_tensor_for_summary tfndims flatten get_weight_variable put_kernels_on_grid _upsample_filt get_latest_model_checkpoint_path get_bias_variable print_tensornames_in_checkpoint_file get_rhs_dim resize_image squeeze uint8 convert_to_uint8 mean zeros range pixel_wise_xent axis generate_error_maps data_loader predict_segmentation_sample argmax log data_switch str list transpose squeeze makefolder imshow savefig append data_identifier range asarray close load_weights convert_batch_to_onehot image_size join print reshape preproc_image phiseg nlabels figure createCLAHE COLOR_BGR2LAB apply merge COLOR_LAB2BGR cvtColor split latent_levels save VideoWriter histogram_equalization resize_image resize VideoWriter_fourcc release run RETR_TREE fromarray destroyAllWindows convert_to_uint8 COLOR_BGR2RGB getRotationMatrix2D shape warpAffine concatenate COLOR_GRAY2BGR findContours unique uint8 drawContours CHAIN_APPROX_SIMPLE ANTIALIAS write zfill inRange cvtColor predict_segmentation_sample_levels copy reversed softmax enumerate add_subplot flatten warning show iterate_batches dc predict mean info savez s_out_eval_sm generalised_energy_distance tile variance_ncc_dist std train experiment_name enumerate zeros uint8 astype range append convert_to_onehot range flatten mean std len flatten mean std len makedirs load Nifti1Image to_filename 
Nifti1Image eye save min astype float32 divide max clip mean std float32 copy percentile copy zeros map_image_to_intensity_range range shape zeros normalise_image range shape sum append dist_fct range pixel_wise_xent mean ncc append zeros range len join print gethostname warning info split shape zeros loads list transpose hash append getsize train_test_split create_group update asarray bytearray astype close unique items File find_subset_for_id create_dataset fsdecode join prepare_data makefolder info get conv2D get conv2D get get max_pool max_pool3d avg_pool name reduce_mean histogram name reduce_mean histogram as_list stack as_list stack as_list resize_images stack as_list stack flatten get_rhs_dim slice subtract append range len as_list subtract pad mod name histogram one_hot ndims multiply reduce_sum softmax argmax get softmax_cross_entropy_with_logits constant reshape multiply reduce_sum reduce_mean array len get_or_create_global_step scalar parametrize_variable get_rhs_dim as_list get reshape stack cast get factorization uint8 constant reshape transpose reduce_max pad stack cast reduce_min print sorted NewCheckpointReader get_variable_to_shape_map NewCheckpointReader int join glob append max get _bilinear_upsample_weights variance_scaling_initializer constant truncated_normal Variable add_to_collection info xavier_initializer get_variable constant info zeros range _upsample_filt | # PHiSeg Code Public tensorflow implementation for our paper [PHiSeg: Capturing Uncertainty in Medical Image Segmentation](https://arxiv.org/abs/1906.04045) method, which was accepted for presentation at [MICCAI 2019](https://www.miccai2019.org/). If you find this code helpful in your research please cite the following paper: ``` @article{PHiSeg2019Baumgartner, author={Baumgartner, Christian F. and Tezcan, Kerem C. and Chaitanya, Krishna and H{\"o}tker, Andreas M. and Muehlematter, Urs J. and Schawkat, Khoschy and Becker, Anton S. and | 1,513 |
baumgach/acdc_segmenter | ['semantic segmentation'] | ['An Exploration of 2D and 3D Deep Learning Techniques for Cardiac MR Image Segmentation'] | acdc_data.py experiments/unet2D_add_bn_wxent.py experiments/unet2D_bn_wxent.py tfwrapper/losses.py evaluate_patients.py train.py experiments/unet2D_bn_modified_wxent.py image_utils.py background_generator.py experiments/unet2D_bn_modified_dice.py utils.py experiments/unet2D_bn_modified_xent.py tfwrapper/utils.py experiments/FCN8_bn_wxent.py metrics_acdc.py tfwrapper/layers.py experiments/unet3D_bn_modified_wxent.py config/system.py model.py test_loop.py model_zoo.py crop_or_pad_slice_to_size prepare_data load_and_maybe_process_data _release_tmp_memory _write_range_to_hdf5 BackgroundGenerator score_data convert_to_uint8 reshape_2Dimage_to_tensor normalise_images normalise_image keep_largest_connected_components natural_order boxplot_metrics print_latex_tables compute_metrics_on_directories_raw main print_stats conv_int prepare_tensor_for_summary training_step evaluation inference loss predict unet2D_bn_padding_same unet2D_bn unet2D_padding_same_shallow unet3D_bn_modified VGG16_FCN_8_bn unet2D_bn_padding_same_modified VGG16_FCN_8 unet2D_bn_modified unet2D_padding_same unet3D_bn unet2D_bn_padding_same_shallow main do_eval iterate_minibatches run_training augmentation_function main get_latest_model_checkpoint_path load_nii makefolder save_nii setup_GPU_environment _bilinear_upsample_weights deconv2D_layer_bn conv3D_layer_bn conv2D_layer_bn _add_summaries get_weight_variable deconv3D_layer conv2D_dilated_layer_bn _upsample_filt deconv2D_layer crop_and_concat_layer dense_layer dropout_layer max_pool_layer3d dense_layer_bn get_bias_variable conv3D_layer conv2D_dilated_layer deconv3D_layer_bn batch_normalisation_layer max_pool_layer2d conv2D_layer pad_to_size pixel_wise_cross_entropy_loss_weighted dice_loss per_structure_dice pixel_wise_cross_entropy_loss flatten get_rhs_dim shape zeros lstrip rescale normalise_image open crop_or_pad_slice_to_size list squeeze split append range glob close copy load_nii zip info listdir load join int isdir File create_dataset zeros _release_tmp_memory _write_range_to_hdf5 len asarray info clear collect join prepare_data makefolder warning info list float32 placeholder nlabels Saver global_variables_initializer image_size predict min astype float32 divide max mean std float32 copy divide mean shape zeros std range shape label argmax zeros regionprops isinstance hd join sorted basename glob DataFrame dc assd load_nii get_zooms warning zip append sum prod join join subplots set_figwidth close set_figheight savefig boxplot join boxplot_metrics print_latex_tables compute_metrics_on_directories_raw mean info print_stats makedirs dice_loss one_hot pixel_wise_cross_entropy_loss_weighted add pixel_wise_cross_entropy_loss model_handle arg_max softmax get UPDATE_OPS get_collection optimizer_handle prepare_tensor_for_summary one_hot reduce_mean image arg_max softmax loss per_structure_dice reshape stack cast conv2D_layer max_pool_layer2d deconv2D_layer add deconv2D_layer_bn max_pool_layer2d conv2D_layer_bn add deconv2D_layer_bn max_pool_layer2d conv2D_layer_bn concat deconv2D_layer_bn max_pool_layer2d conv2D_layer_bn concat conv2D_layer max_pool_layer2d deconv2D_layer concat conv2D_layer max_pool_layer2d deconv2D_layer concat deconv2D_layer_bn max_pool_layer2d conv2D_layer_bn concat deconv2D_layer_bn crop_and_concat_layer conv2D_layer_bn pad max_pool_layer2d deconv2D_layer_bn crop_and_concat_layer conv2D_layer_bn pad 
max_pool_layer2d crop_and_concat_layer conv3D_layer_bn deconv3D_layer_bn pad max_pool_layer3d crop_and_concat_layer conv3D_layer_bn deconv3D_layer_bn pad max_pool_layer3d list close float32 placeholder Saver load_and_maybe_process_data global_variables_initializer image_size predict int dtype hasattr use_data_fraction close train_on_all_data shape warning load_and_maybe_process_data info get_latest_model_checkpoint_path experiment_name float info iterate_minibatches BackgroundGenerator run get asarray squeeze randint rotate_image random_integers shape uniform append resize_image fliplr range list arange sort reshape shuffle augmentation_function range image_size __file__ run_training MakeDirs copy makedirs load Nifti1Image to_filename int join glob append max print gethostname warning info max_pool max_pool3d as_list subtract append range len as_list subtract pad mod cond batch_norm as_list stack as_list stack flatten get_rhs_dim conv2D_layer activation batch_normalisation_layer batch_normalisation_layer conv3D_layer activation batch_normalisation_layer deconv2D_layer activation batch_normalisation_layer activation deconv3D_layer conv2D_dilated_layer batch_normalisation_layer activation batch_normalisation_layer dense_layer activation get _bilinear_upsample_weights variance_scaling_initializer constant truncated_normal Variable add_to_collection xavier_initializer get_variable constant zeros range _upsample_filt name histogram one_hot ndims multiply reduce_sum softmax argmax reduce_mean per_structure_dice softmax_cross_entropy_with_logits reduce_mean constant softmax_cross_entropy_with_logits_v2 reshape multiply reduce_sum reduce_mean array len get_rhs_dim as_list | This repository contains code to train state-of-the-art cardiac segmentation networks as described in this paper: [An Exploration of 2D and 3D Deep Learning Techniques for Cardiac MR Image Segmentation](https://arxiv.org/abs/1709.04496). The modified U-Net architecture achieved the **3rd overall rank** at the MICCAI 2017 [ACDC Cardiac segmentation challenge](https://www.creatis.insa-lyon.fr/Challenge/acdc/index.html). Authors: - Christian F. Baumgartner ([email](mailto:[email protected])) - Lisa M. Koch ([email](mailto:[email protected])) If you find this code helpful in your research please cite the following paper: ``` @article{baumgartner2017exploration, | 1,514 |
bbbbbbzhou/DuDoRNet | ['mri reconstruction'] | ['DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction with Deep T1 Prior'] | train.py utils/__init__.py networks/__init__.py utils/argparser.py datasets/utilizes.py models/du_recurrent_model.py test.py models/__init__.py datasets/__init__.py utils/misc.py datasets/cartesian_dataset.py models/utils.py networks/networks.py MRIDataset_Cartesian apply_mask norm_img MaskFunc_Cartesian get_datasets RecurrentModel DataConsistencyInKspace_I roll ifftshift get_nonlinearity Swish psnr get_scheduler ifft2 to_tensor fftshift fft2 mse data_consistency DataConsistencyInKspace_K to_spectral_img get_recon_loss complex_abs complex_abs_eval AverageMeter create_model SEDRDB DRDN SEDRDB_Conv gaussian_weights_init SELayer get_generator set_gpu get_nondefaults merge_args update_from_yaml save_args convert_dict2args get_last_checkpoint read_dir convert_hu2coefficient convert_coefficient2hu compute_metrics get_aapm_minmax display_transform arange next_power_of_2 get_config save_gradient prepare_sub_folder tensor_to_np design_filter np_to_pil shape mask_func array MRIDataset_Cartesian LambdaLR StepLR L1Loss MSELoss item item stack iscomplexobj fft fftshift ifftshift ifft fftshift ifftshift isinstance size zip narrow tuple dim range isinstance tuple dim range isinstance complex_abs size zeros ifft2 range RecurrentModel normal_ weight __name__ to DataParallel use_prior format print DRDN sum print dump __dict__ items list __dict__ parse_args convert_dict2args str join format list items map append items list get_default __dict__ sorted read_dir append compare_ssim max compare_psnr join max endswith loadmat min tqdm append float listdir array print join format makedirs uint8 imwrite numpy transpose uint8 astype Compose arange next_power_of_2 concatenate pi real zeros max append list tolist | # DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction with Deep T1 Prior Bo Zhou, S. Kevin Zhou IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 [[Paper](http://openaccess.thecvf.com/content_CVPR_2020/papers/Zhou_DuDoRNet_Learning_a_Dual-Domain_Recurrent_Network_for_Fast_MRI_Reconstruction_CVPR_2020_paper.pdf)] This repository contains the PyTorch implementation of DuDoRNet. ### Citation If you use this code for your research or project, please cite: @inproceedings{zhou2020dudornet, title={DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction with Deep T1 Prior}, author={Zhou, Bo and Zhou, S Kevin}, | 1,515 |
bbenligiray/greedy-face-features | ['facial expression recognition'] | ['Greedy Search for Descriptive Spatial Face Features'] | feature_selector.py data_handler.py metadata.py main.py feature_extractor.py segment_dataset shuffle_arrays convert_dataset_list_to_numpy_array fold_generator itemize_dataset extract_features use_features_for_classification sequential_forward_selection evaluate_final_feature_selection test_features main write_features load_features ndarray range len argmax int list shuffle_arrays choice argsort round zeros sum max range len list shuffle_arrays convert_dataset_list_to_numpy_array array range len permutation len join readline int sort close append listdir open join predictor imsave itemize_dataset shape_predictor mkdir append zeros imread range detector circle get_frontal_face_detector print copy test_features zeros range normalize NuSVC predict fit use_features_for_classification astype confusion_matrix mean diagonal sum sum emotion_labels concatenate use_features_for_classification print astype confusion_matrix fold_generator next mean trace diagonal append zeros no_classes range no_subjects extractall evaluate_final_feature_selection write_features open segment_dataset input extract_features urlretrieve close lower eval sequential_forward_selection load_features ZipFile read remove print write rmtree zeros convert_dataset_list_to_numpy_array | ## Greedy Search for Descriptive Spatial Face Features My implementation of the following paper: [Gacav, C.; Benligiray, B.; Topal, C., "Greedy search for descriptive spatial face features," International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.](https://arxiv.org/abs/1701.01879) The original paper was implemented by Caner Gacav in C++. I reproduced the results in Python without referring to the original code. The original code used dlib for SVM, this one uses scikit-learn. The results seem to be about 2-4% higher than what we have reported in the paper, probably because of the difference in SVM parameters/implementation. #### What is this? * Spatial features are derived from displacements of facial landmarks. They are a kind of geometric feature that can be used for facial expression recognition. * A large number of spatial features can be extracted from a face, but they are not all equally descriptive. | 1,516 |
bborja/wasr_network | ['autonomous navigation', 'semantic segmentation'] | ['A water-obstacle separation and refinement network for unmanned surface vehicles'] | kaffe/tensorflow/transformer.py wasr_models/wasr_imu_model_fu2.py wasr_train_imu.py wasr_models/image_reader.py wasr_models/utils.py wasr_inference_imu.py kaffe/caffe/__init__.py kaffe/caffe/caffepb.py wasr_inference_noimu.py wasr_models/__init__.py kaffe/layers.py kaffe/__init__.py kaffe/tensorflow/__init__.py wasr_train_noimu.py wasr_inference_noimu_general.py kaffe/errors.py kaffe/shapes.py kaffe/caffe/resolver.py kaffe/graph.py kaffe/tensorflow/network.py kaffe/transformers.py wasr_models/wasr_noimu_model.py load main get_arguments load main get_arguments load main get_arguments get_tensors_in_checkpoint_file load cost_function_separate_water_obstacle build_tensors_in_checkpoint_file save main focal_loss_cost get_arguments get_tensors_in_checkpoint_file load cost_function_separate_water_obstacle build_tensors_in_checkpoint_file save main focal_loss_cost get_arguments print_stderr KaffeError Graph Node GraphBuilder NodeMapper NodeKind NodeDispatchError NodeDispatch LayerAdapter shape_data shape_not_implemented get_filter_output_shape shape_concat shape_convolution shape_inner_product shape_scalar shape_pool get_strided_kernel_output_shape shape_mem_data shape_identity SubNodeFuser BatchNormScaleBiasFuser DataReshaper BatchNormPreprocessor ReLUFuser DataInjector ParameterNamer NodeRenamer ReductionParameter HingeLossParameter BlobProto BlobProtoVector NetStateRule LayerParameter PowerParameter FillerParameter ArgMaxParameter V0LayerParameter InnerProductParameter ConvolutionParameter SolverState EltwiseParameter LossParameter SliceParameter BatchNormParameter WindowDataParameter DummyDataParameter HDF5OutputParameter TanHParameter TransformationParameter SoftmaxParameter ConcatParameter DataParameter SPPParameter ParamSpec EmbedParameter SolverParameter InputParameter MVNParameter ContrastiveLossParameter NetState NetParameter BiasParameter CropParameter DropoutParameter PoolingParameter Datum SigmoidParameter BlobShape ExpParameter AccuracyParameter LogParameter ThresholdParameter TileParameter MemoryDataParameter LRNParameter ReLUParameter ImageDataParameter ELUParameter ReshapeParameter InfogainLossParameter ScaleParameter V1LayerParameter HDF5DataParameter PReLUParameter FlattenParameter PythonParameter show_fallback_warning CaffeResolver has_pycaffe get_caffe_resolver layer Network MaybeActivated TensorFlowNode get_padding_type TensorFlowEmitter TensorFlowTransformer TensorFlowMapper read_images_from_disk ImageReader image_scaling read_labeled_image_list random_crop_and_pad_image_and_labels image_mirroring inv_preprocess decode_labels prepare_label wasr_IMU_FU2 wasr_NOIMU2 add_argument ArgumentParser print restore format imwrite concat decode_labels dataset_path Saver save_dir argmax Session run open global_variables resize_bilinear placeholder cast expand_dims model_weights sum imread get_arguments ConfigProto seq load join time print seq_txt makedirs global_variables_initializer split exists format img_path int sorted NewCheckpointReader get_tensor append get_variable_to_shape_map list get_tensor_by_name enumerate print join makedirs equal add softmax cast log get_shape resize_images ones_like divide reduce_sum where reduce_mean cast expand_dims equal squared_difference get_tensors_in_checkpoint_file where set_random_seed add_n RMSPropOptimizer save gather list restore num_classes squeeze less_equal map scalar_mul 
build_tensors_in_checkpoint_file restore_from range num_steps start_queue_runners group stack random_seed power constant learning_rate minimize reshape snapshot_dir cost_function_separate_water_obstacle request_stop Coordinator pow int32 prepare_label focal_loss_cost Print len sparse_softmax_cross_entropy_with_logits reduce_mean write pad_h stride_w pad_w kernel_h kernel_w float stride_h height hasattr get_filter_output_shape kernel_parameters output_shape parameters width output_shape parameters list parents axis output_shape output_shape CaffeResolver write ceil float height width to_float resize_images to_int32 squeeze multiply stack random_uniform resize_nearest_neighbor expand_dims less stack boolean_mask reverse pad_to_bounding_box random_crop print concat maximum shape cast set_shape append split open image_scaling read_file cast random_crop_and_pad_image_and_labels decode_png decode_jpeg image_mirroring load new shape zeros array range enumerate uint8 astype shape zeros range | # WaSR -- A Water Segmentation and Refinement Maritime Obstacle Detection Network https://arxiv.org/abs/2001.01921 (ICRA 2020)<br> https://youtube.com/watch?v=K78NZbtKYVM (ICRA 2020 Video Presentation)<br> https://ieeexplore.ieee.org/document/9477208 (IEEE TCYB 2021 - WaSR2 extension)<br> https://github.com/lojzezust/WaSR (Pytorch Reimplementation of WaSR network) Obstacle detection using semantic segmentation has become an established approach in autonomous vehicles. However, existing segmentation methods, primarily developed for ground vehicles, are inadequate in an aquatic environment as they produce many false positive (FP) detections in the presence of water reflections and wakes. We propose a novel deep encoder-decoder architecture, a water segmentation and refinement (WaSR) network, specifically designed for the marine environment to address these issues. A deep encoder based on ResNet101 with atrous convolutions enables the extraction of rich visual features, while a novel decoder gradually fuses them with inertial information from the inertial measurement unit (IMU). The inertial information greatly improves the segmentation accuracy of the water component in the presence of visual ambiguities, such as fog on the horizon. Furthermore, a novel loss function for semantic separation is proposed to enforce the separation of different semantic components to increase the robustness of the segmentation. We investigate different loss variants and observe a significant reduction in false positives and an increase in true positives (TP). Experimental results show that WaSR outperforms the current state-of-the-art by approximately 4% in F1-score on a challenging USV dataset. WaSR shows remarkable generalization capabilities and outperforms the state of the art by over 24% in F1 score on a strict domain generalization experiment. <a href="https://youtube.com/watch?v=K78NZbtKYVM"><img align="center" src="figures/wasr_main.jpg" width="100%"></a> Updates: * [August 2021] <a href="https://github.com/lojzezust/WaSR">Pytorch reimplementation made available</a> * [July 2021] Added ICRA2020 Presentation Video | 1,517 |
bcmi/CaGNet-Zero-Shot-Semantic-Segmentation | ['word embeddings', 'semantic segmentation'] | ['Context-aware Feature Generation for Zero-shot Semantic Segmentation'] | losses.py train.py networks.py trainer.py libs/datasets/__init__.py tools.py blocks.py resnet.py libs/datasets/cocostuff.py model.py init_loss DiscLoss DiscLossWGAN GANLoss ClsLoss DiscLossWGANGP DiscLossLS PerceptualLoss ContentLoss MMDLoss DeepLabV2_local Generator DeepLabV2_ResNet101_local Discriminator MSCC DeepLabV2_ResNet101_local_MSC _Bottleneck _ResBlock _ConvBatchNormReLU resize_target Step_Scheduler _fast_hist scores get_config scores_gzsl Const_Scheduler construct_gt_st logWritter get_split RandomImageSampler MeaninglessError get_embedding main parse_args Trainer get_scheduler poly_lr_scheduler weights_init get_params get_parent_class CocoStuff164k _CocoStuff LoaderZLS CocoStuff10k get_dataset L1Loss DiscLoss DiscLossWGAN ClsLoss DiscLossWGANGP MSELoss DiscLossLS PerceptualLoss ContentLoss MMDLoss list LongTensor view shape repeat long range int32 resize zeros numpy enumerate load join format normalize concatenate print exit tensor open load join genfromtxt list concatenate astype copy int32 RandomImageSampler enumerate reshape list range nanmean dict zip zeros sum diag zeros zip add_argument ArgumentParser config model init_model Const_Scheduler DataLoader DataParallel save device get_split logWritter Step_Scheduler list set_device len step iter append parse_args to get_lr next range get_embedding cat val format get_config copy resume_from schedule zip zeros join items experimentid print multigpus write now scores_gzsl dumps ngpu train numpy makedirs named_modules parameters isinstance Conv2d LambdaLR StepLR poly_lr_scheduler items list isinstance | # CaGNet: Context-aware Feature Generation for Zero-shot Semantic Segmentation
Code for our **ACM MM 2020** paper *"Context-aware Feature Generation for Zero-shot Semantic Segmentation"*.
Created by [Zhangxuan Gu](https://github.com/zhangxgu), [Siyuan Zhou](https://github.com/Siyuan-Zhou), [Li Niu\*](https://github.com/ustcnewly), Zihan Zhao, Liqing Zhang\*.
Paper Link: [[arXiv]](http://arxiv.org/abs/2008.06893)
## News
| 1,518 |
bdecost/pixelnet | ['edge detection', 'semantic segmentation'] | ['PixelNet: Representation of the pixels, by the pixels, and for the pixels'] | pixelnet/vgg.py pixelnet/pixelnet.py pixelnet/utils.py test/pixelnet_arch.py pixelnet/hypercolumn.py pixelnet/upsample.py sparse_upsample_bilinear get_values offset build_model random_foreground_indices sparse_upsample_output_shape flatten_pixels build_model unflatten_pixels dense_bn dense_selu sparse_upsample_bilinear get_values offset sparse_upsample_nearest sparse_upsample_output_shape stratified_pixel_samples random_intensity_shift random_crop smooth_labels augment random_pixel_samples fully_conv_model load_imagenet_weights stack cast int32 get_values offset float32 cast clip_by_value list ones size where choice shape stack append range input Lambda Input Lambda shape Lambda range output dense_bn dense_selu cast float32 random_intensity_shift flip_axis list zoom random rotate choice shape max range list choice shape zeros range to_categorical list ones placeholder shape range flat random_crop astype choice smooth_labels augment reshape float32 dstack any int32 randint array range to_categorical where list placeholder shape append range flat random_crop concatenate astype choice stack augment smooth_labels int reshape float32 any array join get_shape format VGG16 reshape weights get_layer load_weights get_weights set_weights print Model Input load_imagenet_weights | pixelnet -------- tensorflow implementation of the pixelnet CNN architecture ([arxiv:1702.06506](http://arxiv.org/abs/1702.06506)). See [github.com/bdecost/uhcs-segment](https://www.github.com/bdecost/uhcs-segment) for training scripts for ultrahigh carbon steel segmentation paper ([doi:10.1017/S1431927618015635](https://www.dx.doi.org/10.1017/S1431927618015635), [arxiv:1805.08693](https://www.arxiv.org/abs/1805.08693)) | 1,519 |
becauseofAI/lffd-pytorch | ['face detection'] | ['LFFD: A Light and Fast Face Detector for Edge Devices'] | ChasingTrainFramework_GeneralOneClassDetection/loss_layer_farm/cross_entropy_with_focal_loss_for_one_class_detection.py face_detection/net_farm/__init__.py head_detection/accuracy_evaluation/evaluation_on_brainwash.py pedestrian_detection/symbol_farm/symbol_30_320_20L_4scales_v1.py ChasingTrainFramework_GeneralOneClassDetection/image_augmentation/augmentor.py license_plate_detection/accuracy_evaluation/evaluation_on_CCPD.py face_detection/config_farm/configuration_10_560_25L_8scales_v1.py license_plate_detection/symbol_farm/symbol_64_512_16L_3scales_v1.py license_plate_detection/inference_speed_evaluation/inference_speed_eval.py face_detection/data_provider_farm/text_list_adapter.py head_detection/data_provider_farm/text_list_adapter.py head_detection/data_iterator_farm/multithread_dataiter_for_cross_entropy_v1.py pedestrian_detection/config_farm/configuration_30_320_20L_4scales_v1.py ChasingTrainFramework_GeneralOneClassDetection/loss_layer_farm/cross_entropy_with_hnm_for_one_class_detection.py license_plate_detection/data_provider_farm/text_list_adapter.py head_detection/config_farm/configuration_10_160_17L_4scales_v1.py license_plate_detection/data_iterator_farm/multithread_dataiter_for_cross_entropy_v1.py face_detection/inference_speed_evaluation/inference_speed_eval.py license_plate_detection/data_provider_farm/reformat_CCPD.py head_detection/inference_speed_evaluation/inference_speed_eval.py pedestrian_detection/metric_farm/metric_default.py face_detection/accuracy_evaluation/predict.py head_detection/symbol_farm/symbol_10_160_17L_4scales_v1.py ChasingTrainFramework_GeneralOneClassDetection/data_provider_base/base_provider.py ChasingTrainFramework_GeneralOneClassDetection/data_provider_base/base_data_adapter.py ChasingTrainFramework_GeneralOneClassDetection/logging_GOCD.py face_detection/deploy_tensorrt/predict_tensorrt.py license_plate_detection/data_provider_farm/pickle_provider.py pedestrian_detection/data_provider_farm/pickle_provider.py head_detection/data_provider_farm/reformat_brainwash.py pedestrian_detection/data_iterator_farm/multithread_dataiter_for_cross_entropy_v1.py ChasingTrainFramework_GeneralOneClassDetection/data_iterator_base/data_batch.py ChasingTrainFramework_GeneralOneClassDetection/loss_layer_farm/loss.py face_detection/data_iterator_farm/multithread_dataiter_for_cross_entropy_v1.py ChasingTrainFramework_GeneralOneClassDetection/data_provider_base/text_list_adapter.py face_detection/data_iterator_farm/multithread_dataiter_for_cross_entropy_v2.py ChasingTrainFramework_GeneralOneClassDetection/data_provider_base/pickle_provider.py license_plate_detection/metric_farm/metric_default.py pedestrian_detection/accuracy_evaluation/predict.py ChasingTrainFramework_GeneralOneClassDetection/inference_speed_eval/inference_speed_eval_with_tensorrt_cudnn.py ChasingTrainFramework_GeneralOneClassDetection/train_GOCD.py face_detection/config_farm/__init__.py face_detection/demo/demo.py ChasingTrainFramework_GeneralOneClassDetection/loss_layer_farm/mean_squared_error_with_hnm_for_one_class_detection.py face_detection/net_farm/naivenet.py head_detection/data_provider_farm/pickle_provider.py head_detection/metric_farm/metric_default.py license_plate_detection/accuracy_evaluation/predict.py face_detection/metric_farm/metric_default.py face_detection/accuracy_evaluation/evaluation_on_fddb.py ChasingTrainFramework_GeneralOneClassDetection/solver_GOCD.py 
pedestrian_detection/data_provider_farm/text_list_adapter.py pedestrian_detection/inference_speed_evaluation/inference_speed_eval.py ChasingTrainFramework_GeneralOneClassDetection/loss_layer_farm/mean_squared_error_with_ohem_for_one_class_detection.py license_plate_detection/config_farm/configuration_64_512_16L_3scales_v1.py face_detection/data_provider_farm/pickle_provider.py ChasingTrainFramework_GeneralOneClassDetection/inference_speed_eval/inference_speed_eval_with_mxnet_cudnn.py head_detection/accuracy_evaluation/predict.py pedestrian_detection/data_provider_farm/reformat_caltech.py face_detection/config_farm/configuration_10_320_20L_5scales_v2.py face_detection/deploy_tensorrt/to_onnx.py face_detection/accuracy_evaluation/evaluation_on_widerface.py temp_test init_logging Solver start_train DataBatch DataAdapterBaseclass ProviderBaseclass read_file write_file PickleProvider TextListAdapter Augmentor InferenceSpeedEval InferenceSpeedEval HostDeviceMem focal_loss_for_twoclass focal_loss_for_twoclass_Prop cross_entropy_with_hnm_for_one_class_detection_Prop cross_entropy_with_hnm_for_one_class_detection cross_entropy_with_hnm_for_one_class_detection2 cross_entropy_with_hnm_for_one_class_detection mean_squared_error_with_hnm_for_one_class_detection mean_squared_error_with_hnm_for_one_class_detection_Prop mean_squared_error_with_ohem_for_one_class_detection_Prop mean_squared_error_with_ohem_for_one_class_detection DataBatch run_prediction_folder Predict NMS run run Multithread_DataIter_for_CrossEntropy Multithread_DataIter_for_CrossEntropy read_file write_file PickleProvider TextListAdapter main parse_args run_prediction_folder Inference_TensorRT HostDeviceMem NMS generate_onnx_file Metric naivenet25 conv1x1 naivenet20 conv3x3 get_naivenet BranchNet Resv2Block NaiveNet Resv1Block generate_gt_files generate_predicted_files DataBatch Predict run_prediction_pickle NMS run Multithread_DataIter_for_CrossEntropy read_file write_file PickleProvider dataset_statistics generate_data_list show_image TextListAdapter Metric run_get_net_symbol_for_train loss_branch get_net_symbol generate_gt_files generate_predicted_files Predict run_prediction_pickle DataBatch NMS run_prediction_folder run Multithread_DataIter_for_CrossEntropy read_file write_file PickleProvider annotation_from_name dataset_statistics generate_data_list show_image TextListAdapter Metric run_get_net_symbol_for_train loss_branch get_net_symbol Predict run_prediction_pickle DataBatch NMS run_prediction_folder run Multithread_DataIter_for_CrossEntropy read_file write_file PickleProvider dataset_statistics generate_data_list show_image TextListAdapter Metric run_get_net_symbol_for_train loss_branch get_net_symbol setFormatter addHandler print makedirs exit StreamHandler Formatter dirname setLevel FileHandler init_logging str list items __version__ info Solver fit write PickleProvider TextListAdapter PickleProvider ndarray isinstance print read_by_index positive_index shuffle waitKey imshow rectangle negative_index range enumerate minimum concatenate astype float32 maximum delete argsort append len join Predict waitKey imshow rectangle resize append imread max predict PickleProvider start_train naivenet20 set_device SGD cross_entropy_with_hnm_for_one_class_detection MultiStepLR parameters DataIter Metric init_logging info append cuda get_net_symbol Xavier add_argument ArgumentParser data VideoCapture imwrite FONT_HERSHEY_SIMPLEX VideoWriter VideoWriter_fourcc release destroyAllWindows waitKey shape imshow parse_args imread predict use_gpu 
replace astype join read Predict time uint8 print putText write rectangle cpu gpu do_inference Inference_TensorRT load update basicConfig list items load_model graph export_model float32 dict check_graph split NaiveNet int join str replace print strip makedirs write close split findall float append open join Predict replace print strip makedirs write close IMREAD_COLOR imread predict open Predict PickleProvider print read_by_index positive_index waitKey imshow rectangle negative_index predict join int str print strip makedirs len write close dirname split findall float append open int readlines close shuffle waitKey imshow rectangle open imread range append split int sorted print readlines close min open max range split Convolution Custom slice_axis softmax LinearRegressionOutput Activation Variable Convolution Group loss_branch Activation list_outputs list_arguments print get_net_symbol list_auxiliary_states infer_shape print_summary basename range split shuffle imwrite int split annotation_from_name shuffle str imwrite floor ceil walk enumerate | # A Light and Fast Face Detector for Edge Devices **This repo is updated frequently, keeping up with the latest code is highly recommended.** ## Recent Update * `2019.10.14` The official PyTorch version of LFFD is first online. Now the repo is only preview version. Face detection code for v2 version is released nightly. * `2019.10.16` Now the face detection code for v2 version can train normally. The code of other tasks will be updated soon. ## Introduction This repo is the official PyTorch source code of paper "[LFFD: A Light and Fast Face Detector for Edge Devices](https://arxiv.org/abs/1904.10633)". Our paper presents a light and fast face detector (**LFFD**) for edge devices. LFFD considerably balances both accuracy and latency, resulting in small model size, fast inference speed while achieving excellent accuracy. **Understanding the essence of receptive field makes detection networks interpretable.** | 1,520 |
beer-asr/beer | ['acoustic unit discovery'] | ['Bayesian Subspace Hidden Markov Model for Acoustic Unit Discovery'] | beer/cli/subcommands/shmm/mksphoneloop.py recipes/aud/local/timit/timit-norm-trans.py recipes/timit_v2/utils/prepare-decode-graph.py recipes/timit_v2/utils/vae-hmm-train-with-alignments.py recipes/timit_v2/utils/features-extract-central-frame.py tests/test_mixture.py examples/plotting.py recipes/timit_v2/utils/vae-hmm-decode-parallel.py beer/models/parameters.py beer/models/hmm.py beer/models/vae-old.py recipes/timit_v2/utils/features-stats.py beer/nnet/neuralnetwork.py beer/models/marginalpldaset.py tests/test_bayesmodel.py recipes/aud/utils/prepare_phone_trans.py beer/dists/dirichlet.py tests/test_normal.py beer/cli/subcommands/hmm/mkaligraph.py beer/dists/gamma.py beer/cli/subcommands/hmm/posteriors.py sandbox/kaldi_io/read_kaldi_MFCC_scp.py beer/nnet/residual.py beer/models/normalset.py recipes/timit_v2/utils/hmm-create-graph-and-emissions.py beer/__init__.py tests/test_nnet.py recipes/aud/local/alffa/prepare_transcripts.py recipes/timit_v2/utils/lm-unigram-reestimate.py setup.py recipes/aud/utils/entropy_rate.py recipes/timit_v2/utils/nflow-create.py beer/cli/subcommands/hmm/decode.py recipes/timit_v2/utils/lm-unigram-create.py beer/cli/__init__.py recipes/timit_v2/utils/convert_states_to_phone.py beer/models/categorical.py beer/cli/subcommands/hmm/accumulate.py beer/features.py beer/inference/objectives.py beer/nnet/arnet.py sandbox/kaldi_io/convert_kaldi_MFCC_to_npz.py beer/dists/normal.py beer/cli/subcommands/hmm/__init__.py tests/test_priors.py beer/cli/subcommands/hshmm/init.py beer/cli/subcommands/features/archive.py docs/conf.py beer/inference/__init__.py beer/cli/subcommands/features/extract.py recipes/timit_v2/utils/dual-vae-hmm-align.py tests/test_hmm.py tests/basetest.py beer/cli/subcommands/shmm/__init__.py beer/models/gsm.py beer/models/gamma.py recipes/timit_v2/utils/dual-vae-hmm-set-decoding-graph.py recipes/timit_v2/utils/prepare_state_labels.py recipes/timit_v2/utils/hmm-train-with-alignments.py tests/test_arnet.py beer/models/normal.py recipes/timit_v2/utils/pdf2unit.py recipes/timit_v2/utils/hmm-decode-parallel.py recipes/timit_v2/utils/prepare-alignments.py beer/models/lds.py beer/models/__init__.py recipes/aud/utils/nmi.py recipes/aud/utils/prepare_lang_aud.py sandbox/kaldi_io/read_kaldi_MFCC_stream.py beer/utils.py beer/cli/subcommands/hshmm/mksphoneloop.py beer/cli/subcommands/hmm/mkdecodegraph.py recipes/timit_v2/utils/features-stack-parallel.py recipes/timit_v2/utils/hmm-pdf-id-mapping.py beer/cli/subcommands/features/__init__.py recipes/timit_v2/utils/compare-alignments.py beer/cli/subcommands/dataset/create.py beer/models/vae.py beer/inference/optimizers.py recipes/aud/utils/posteriors.py beer/cli/subcommands/hmm/train.py beer/models/linearreg.py recipes/aud/utils/maptrans.py beer/graph.py recipes/aud/utils/add_prefix.py recipes/timit_v2/utils/dual-vae-hmm-decode-parallel.py recipes/timit_v2/utils/hmm-create.py recipes/aud/local/google_lr/prepare_transcripts.py tests/test_vbi.py beer/cli/subcommands/hmm/mkphoneloopgraph.py beer/cli/subcommands/hshmm/train.py beer/models/categoricalset.py beer/models/mixtureset.py recipes/aud/utils/ali2trans.py recipes/timit_v2/utils/nnet-create.py recipes/timit_v2/utils/prepare-spk-labels.py beer/cli/subcommands/__init__.py recipes/timit_v2/utils/hmm-align.py recipes/aud/utils/score_boundaries2.py beer/vbi.py recipes/timit_v2/utils/dual-vae-hmm-create.py tests/test_subspacemodels.py tests/test_problayers.py 
beer/dists/__init__.py beer/cli/dataset.py tests/test_create_model.py recipes/timit_v2/local/timit-norm-trans.py beer/dists/isonormalgamma.py recipes/aud/utils/maxoverlap_mapping.py recipes/timit_v2/utils/hmm-set-decoding-graph.py recipes/aud/local/timit/timit_lang_prep.py beer/models/basemodel.py beer/nnet/__init__.py recipes/timit_v2/utils/create-decode-graph.py recipes/timit_v2/utils/vae-hmm-create.py tests/test_expfamilyprior.py recipes/timit_v2/utils/vae-hmm-align.py recipes/aud/local/timit/phone_ali.py beer/cli/subcommands/shmm/setprior.py recipes/timit_v2/utils/prepare_trans.py sandbox/plotting.py beer/nnet/problayers.py beer/cli/subcommands/hmm/phonelist.py beer/cli/subcommands/hshmm/__init__.py tests/test_vae.py beer/cli/subcommands/hmm/mkphoneloop.py beer/cli/subcommands/shmm/train.py recipes/timit_v2/utils/hmm-train.py recipes/timit_v2/utils/vae-hmm-set-decoding-graph.py beer/cli/subcommands/dataset/__init__.py beer/cli/subcommands/hmm/mkphones.py recipes/timit_v2/utils/gmm-create.py recipes/timit_v2/local/timit_lang_prep.py recipes/aud/utils/ter.py recipes/timit_v2/utils/mutual-information.py beer/cli/subcommands/shmm/init.py recipes/aud/utils/convert_align.py beer/models/lm.py recipes/timit_v2/utils/dual-vae-hmm-train-with-alignments.py beer/cli/subcommands/hmm/mkphoneloopbigram.py tests/test_utils.py beer/dists/normalwishart.py beer/dists/normalgamma.py beer/models/phoneloop.py beer/dists/basedist.py recipes/aud/utils/score_boundaries.py recipes/timit_v2/utils/features-extract-parallel.py recipes/timit_v2/utils/convert-ali-to-best-path.py beer/models/modelset.py beer/dists/normaldiag.py beer/cli/subcommands/hmm/update.py beer/models/mixture.py tests/test_features.py tests/run_tests.py recipes/timit_v2/utils/score.py hz2mel hz2bark short_term_mspec bark2hz fbank __triangle add_deltas create_fbank mel2hz Graph Arc _state_name _show_graph CompiledGraph State get_gpus sample_from_normals jacobians onehot approximate_hessian logsumexp NoGPUAvailable reserve_gpu symmetrize_matrix make_symposdef get_active_gpus evidence_lower_bound BayesianModelOptimizer collapsed_evidence_lower_bound BayesianModelCoordinateAscentOptimizer EvidenceLowerBoundInstance scale_acc_stats CollapsedEvidenceLowerBoundInstance add_acc_stats Dataset UtteranceIterator Utterance accumulate main setup main setup main setup compute_dct_bases main setup ShowDefaultsAction main setup main setup main state2phone setup main create_graph_from_seq setup get_last_emitting_state_pdf main get_first_emitting_state_pdf setup build_sb build_categorical_2g setup build_categorical build_sbhp main build_hsb main setup build_categorical_2g build_hsb main setup setup count_emitting_state parse_topology main create_pdfs create_unit_graph main setup main state2phone setup main setup main setup main setup main iterate_units setup init_weights_stats iterate_units setup param_stats main init_means_precisions_stats main iterate_units setup main setup main iterate_units setup init_weights_stats iterate_units setup param_stats main init_means_precisions_stats main iterate_units setup main iterate_units setup main setup DistributionTypeMismatch entropy ParametersView kl_div MissingParameterAttribute _check_params_have_attr UndefinedStdParametersClass ConjugateLikelihood ExponentialFamily SupportDimensionMismatch UndefinedParameters DirichletStdParams Dirichlet CategoricalLikelihood Gamma GammaLikelihood GammaStdParams IsotropicNormalGamma IsotropicNormalGammaStdParams IsotropicNormalLikelihood NormalFullCovarianceStdParams 
NormalFullCovariance NormalDiagonalCovarianceStdParams NormalDiagonalCovariance NormalFixedDiagonalCovarianceLikelihood NormalGamma NormalGammaStdParams NormalDiagonalLikelihood _batch_trace NormalWishartStdParams NormalLikelihood NormalWishart EvidenceLowerBoundInstance scale_acc_stats evidence_lower_bound add_acc_stats VBOptimizer VBConjugateOptimizer Model DiscreteLatentModel _default_concentration_param _default_sb_param Categorical SBCategorical _default_param SBCategoricalHyperPrior _optimize_root_sb SBCategoricalSet DirichletLogParams _default_set_sb_param _lower_bound CategoricalSet _default_param Gamma _MeanLogDiagLL _pdfvecs AffineTransform HierarchicalGSM LinearTransform GSMSetMeansParameter _rvecs_from_samples HierarchicalAffineTransform _entropy _pdfvecs_from_rvecs SubspaceBayesianParameterView NoSubspaceBayesianParameters _MeanLogDiagCov _update_models GSMSet _subspace_params SubspaceBayesianParameter GSM _update_params _xentropy _svec_dim HMM compute_dct_bases LDSSet LinearRegression LinearRegressionSet UnigramLM MarginalPLDASet Mixture _default_param MixtureSet RepeatedModelSet ModelSet DynamicallyOrderedModelSet JointModelSet _default_diagcov_param _full_cov UnknownCovarianceType Normal _default_fullcov_param _default_isocov_param _default_diagcov_param _default_isocov_param _default_fullcov_param NormalSet ConjugateBayesianParameter BayesianParameter BigramPhoneLoop PhoneLoop VAEGlobalMeanVariance DualVAEGlobalMeanVariance VAE MeanLogDiagCov VAE ARNetNormalDiagonalCovarianceLayer MaskedLinear create_final_mask SequentialMultipleInput create_mask AutoRegressiveNetwork create IdentityLayer NeuralNetworkBlock TransposeLayer MergeTransform parse_nnet_element create_nnet_element load_value create_nnet_block ReshapeLayer ProbabilisticLayer NormalIdentityCovarianceLayer InverseAutoRegressiveFlow BernoulliLayer NormalDiagonalCovarianceLayer NormalIsotropicCovarianceLayer ResidualFeedForwardNet ResidualFeedFowardBlock create_upper_semicircle create_lower_semicircle plot_gmm plot_normal plot_covariance plot_circle plot_shaded_area plot_hmm main main _clean main read_timit_labels run main main run run load_transcript main load_transcript map_trans main load_mapping load_transcript main main compute_posts main main main get_hits load_transcript map_trans load_mapping main boundaries get_hits load_transcript map_trans load_mapping main boundaries load_transcript wer map_trans load_mapping main run main main main load_pdf_mapping main main read_phonelist convert_state_to_phone main main main main main main load_batch run get_cf compute_dct_bases main stack_features run accumulate main main create_emissions parse_topology create_lds_emissions main create_gmm_emissions create_unit_graph main main main main main load_batch main main main main main main read_mapping main convert_state_to_phone main create_graph_from_seq main read_mapping main convert_state_to_phone main read_phonelist main read_phonelist read_phone_map main filter_text read_text main main main main main load_batch create_upper_semicircle create_lower_semicircle plot_gmm plot_normal plot_covariance plot_circle plot_shaded_area plot_hmm main readScp readStream BaseTest run TestUtils TestARNetwork TestMaskedLinearTransform TestBayesianModel create_normalfull create_jointnormalwishart create_gamma TestBayesianParameterSet create_dirichlet create_normalgamma create_matrixnormal create_normaliso create_jointnormalgamma create_normalwishart TestBayesianParameter TestCreateModel TestJointNormalWishartPrior normalwishart_split_np 
isotropic_normalgamma_grad_log_norm TestNormalIsotropicCovariancePrior gamma_grad_log_norm normalwishart_log_norm jointnormalwishart_log_norm wishart_log_norm wishart_std_params TestJointExpFamilyPrior isotropic_normalgamma_log_norm gamma_log_norm normalgamma_log_norm matrixnormal_fc_log_norm jointnormalgamma_log_norm normal_fc_log_norm normal_fc_grad_log_norm normalgamma_grad_log_norm normalwishart_grad_log_norm TestNormalGammaPrior matrixnormal_fc_grad_log_norm TestGammaPrior joint_iso_normalgamma_log_norm TestDirichletPrior TestNormalFullCovariancePrior TestMatrixNormalPrior normal_iso_log_norm wishart_grad_log_norm TestWishartPrior TestIsotropicNormalGammaPrior jointnormalwishart_split_np normal_fc_split_np normal_iso_split_np TestJointNormalGammaPrior matrixnormal_fc_split_np dirichlet_grad_log_norm dirichlet_log_norm TestJointIsotropicNormalGammaPrior jointnormalgamma_split_nparams joint_iso_normalgamma_split_nparams normal_iso_grad_log_norm TestNormalWishartPrior TestFbank TestAlignModelSet TestForwardBackwardViterbi create_modelset_full viterbi create_modelset_diag backward create_ali_trans_mat TestCreateTransMatrix TestHMM create_trans_mat forward TestMixture TestNeuralNetwork TestNormalDiagonalCovariance TestNormalFullCovarianceSet TestNormalSetSharedFullCovariance TestNormalFullCovariance TestNormalDiagonalCovarianceSet TestNormalIsotropicCovariance TestNormalsotropicCovarianceSet TestNormalSetSharedDiagonalCovariance TestGammaPrior TestJointNormalWishartPrior TestDirichletPrior TestNormalFullCovariancePrior BaseTestPrior TestJointNormalGammaPrior TestJointIsotropicNormalGammaPrior TestNormalWishartPrior TestNormalGammaPrior TestWishartPrior TestIsotropicNormalGammaPrior TestProbabilisticLayers TestPPCA TestKLDivStdNormal TestPLDASet TestUtilityFunctions TestVAE TestEvidenceLowerbound logical_and astype linspace float len arange hz2scale scale2hz floor linspace zeros __triangle range dot append int itemsize copy mean log2 floor len int T itemsize as_strided log2 floor abs array create_fbank len str update states arcs end node _state_name edge Digraph start symbols stdout split StringIO run stdout StringIO run str list items get_gpus debug zeros get_active_gpus zeros long len list log_ size where max symeig tensor where symmetrize_matrix shape range enumerate view backward append sum difference intersection items list clear_cache sufficient_statistics accumulate expected_log_likelihood float sum len sufficient_statistics accumulate marginal_log_likelihood clear_cache load list zeros sum keys len add_argument debug accumulate abspath info features Dataset set_defaults add_subparsers add_parser arange cos pi zeros range stdin arange tuple short_term_mspec pi outdir save log create_fbank run exit sin sum update format sqrt stdout join read BytesIO T error compute_dct_bases add_deltas split load evidence_lower_bound alis append list values list state2phone print start_pdf utts per_frame start_state add_state Graph add_arc replace_state append normalize enumerate create_graph_from_seq array items get_first_emitting_state_pdf replace_state get_last_emitting_state_pdf normalize ones ones create create compile concentration len graph categorical modelset end_pdf start_state add_state start_end_group Graph add_arc end_state append add set add_state parse_topology Graph add_arc range len states create add_mutually_exclusive_group create_unit_graph create_pdfs JointModelSet natsorted keys zeros sum enumerate numpy posteriors init_step backward mean_field_factorization BayesianModelOptimizer 
epochs utterances step range enumerate sync strip optim_state VBConjugateOptimizer load_state_dict conjugate_bayesian_parameters state_dict range shared_transform lang same expected_pdfvecs update_models adapt_from init_from sum stats zeros_like sum zeros_like stats view in_dim out_dim HierarchicalGSM init_means_precisions_stats max init_weights_stats new_latent_posteriors means_precisions ones unit_latent_dim param_stats modelsets iterate_units bayesian_parameters latent_dim int deepcopy weights replace_parameters transform zeros clip_grad_norm_ zero_grad SGD cuda values open skip_language_posterior clip_grad Adam VBOptimizer param_groups reserve_gpu sploop_to_lang skip_unit_posterior skip_root_subspace parameters cpu gpu classes long dlatent_dim data var Parameter zeros_like randn diag min clone latent_prior mean params natural_parameters expected_sufficient_statistics log_norm natural_parameters expected_sufficient_statistics log_norm len from_std_parameters ones clone from_std_parameters ones_like clone from_std_parameters cumsum _log_v sum _log_prob dispatch stickbreaking root_sb_categorical backward DirichletLogParams clone zero_grad requires_grad_ parameters DirichletStdParams optim_cls _lower_bound append float step range stickbreaking cumsum ones reshape clone from_std_parameters mean repeat posterior len ones_like clone bayesian_parameters reshape sufficient_statistics_dim likelihood_fn pdfvectors_from_rvectors len shape reshape numel _subspace_params _update_params zip reshape sufficient_statistics shape expected_log_likelihood mean sufficient_statistics shape mean reshape transform shape reshape _subspace_params shape reshape cat _full_cov from_std_parameters inverse tensor len from_std_parameters tensor diag _full_cov tensor max from_std_parameters _full_cov randn repeat randn repeat len randn repeat len zeros enumerate zeros enumerate list ARNetNormalDiagonalCovarianceLayer MaskedLinear create_final_mask MergeTransform create_mask choices append range Linear join split hasattr function parse_nnet_element getattr append nn split get create_nnet_element Sequential Sequential append patch linspace pi create_upper_semicircle create_upper_semicircle transform plot_shaded_area create_lower_semicircle plot_circle range cholesky plot_covariance plot_normal modelset zip numpy detach n_states plot_normal modelset zip numpy isfile add_argument dirname output_dir ArgumentParser lexicon parse_args makedirs join stdin print add_argument add_mutually_exclusive_group map_60_48 ArgumentParser parse_args split sorted remove set dict strip defaultdict load_transcript log2 trans append items list mapping unk load_mapping map_trans ref hyp zip counts add old sufficient_statistics value _pc_llhs acoustic_scale non_speech_unit nunits append enumerate deepcopy abs argmin min str boundaries reshape min zeros range len examples choices decode float ali_graphs feats verbose hyp_alis DEBUG setLevel ref_alis groups partial files hyp_pdf_mapping load_pdf_mapping ref_pdf_mapping epsilon str asarray ali append range len hyp_phone hyp_states read_phonelist phone_map hmm_conf unigram_lm use_silence encoder encoder_problayer1 samples_and_llh prob_layer2 encoder_out_dim DualVAEGlobalMeanVariance prob_layer1 InverseAutoRegressiveFlow stats ConstantParameter symbols cat batch_size device modules_parameters mean_field_groups round view BayesianModelCoordinateAscentOptimizer to natural_backward use_gpu shuffle spk_ids feat_stats nnet_optim_state append int enumerate load savez fea out append ali array get_cf zeros 
range reshape len context join stack_features outdir save savez dim add_mutually_exclusive_group append append normalize create RepeatedModelSet create RepeatedModelSet create_gmm_emissions error create_lds_emissions exit create_emissions parse_topology states pdf_id tmpdir alignments vocsize from_numpy Sequential create_nnet_element AutoRegressiveNetwork pop convert_state_to_phone phone_level read_mapping map_pdf_to_phone out reshape phoneids phonefile out_npz phonelist split reference duplicate filter_text read_text read_phone_map hypothesis encoder_problayer VAEGlobalMeanVariance prob_layer load_batch expected_value map sub pop list asarray map sub append enumerate split addTest TestSuite tensor_type nruns get_testsuite __all__ getattr init_seed range int type item int type item int type item ger int type item ger int type item ger int type item int type item ger int type item type gammaln len psi sum log len len joint_iso_normalgamma_split_nparams sum gammaln reshape gammaln reshape psi log len jointnormalgamma_split_nparams reshape sum int sqrt len outer log normalwishart_split_np slogdet arange normalwishart_split_np inv outer psi trace sum slogdet int sqrt len jointnormalwishart_split_np sum log slogdet int sqrt len normal_fc_split_np inv slogdet normal_fc_split_np inv log normal_iso_split_np len normal_iso_split_np T inv matrixnormal_fc_split_np trace slogdet matrixnormal_fc_split_np inv int reshape inv sqrt len wishart_std_params arange slogdet sum wishart_std_params arange slogdet range inf log zeros_like list inf zeros_like logsumexp reversed range log T list zeros_like insert range reversed zeros float argmax log len type NormalGammaPrior NormalDiagonalCovarianceSet type NormalWishartPrior NormalFullCovarianceSet arange len zeros range enumerate ones arange diag | BEER: the Bayesian spEEch Recognizer ==================================== Beer is a toolkit that provide Bayesian machine learning tools for speech technologies. Beer is currently under construction and many things are subject to change ! Requirements ------------ Beer is built upon the [pytorch](http://pytorch.org) and several other third party packages. To make sure that all the dependencies are | 1,521 |
befelix/SafeMDP | ['safe exploration', 'gaussian processes'] | ['Safe Exploration in Finite Markov Decision Processes with Gaussian Processes'] | examples/mars/plot_utilities.py safemdp/__init__.py safemdp/SafeMDP_class.py doc/conf.py safemdp/grid_world.py safemdp/test.py examples/mars/mars.py examples/mars/mars_utilities.py setup.py safemdp/utilities.py examples/sample.py examples/mars/generate_plots.py read performance_metrics initialize_SafeMDP_object mars_map emulate_color plot_dist_from_C paper_figure plot_coverage format_figure plot_paper safe_subpath states_to_nodes compute_true_S_hat compute_true_safe_set shortest_path compute_S_hat0 nodes_to_states grid draw_gp_sample dynamics_vec_ind GridWorld grid_world_graph path_to_boolean_matrix reverse_action SafeMDP reachable_set returnable_set link_graph_and_safe_set ReachableSetTest MaxOutDegreeTest GridWorldGraphTest TestTrueSafeSet DifferenceKernelTest ReturnableSetTest DifferenceKernel max_out_degree tuple ReadAsArray flatten Open linspace max show colorbar imshow meshgrid UseExceptions urlretrieve spline_interpolator copy RectBivariateSpline T print min system GetRasterBand world_shape GridWorld altitudes compute_true_S_hat compute_true_safe_set reshape graph compute_S_hat0 h copy initial_nodes Gaussian add_observation set_XY GP Matern52 range count_nonzero S_hat logical_and float sum update subplots figure list set_label arange set_yticklabels set_yticks set_xlabel tight_layout set_ticks set_linewidth set_xticks set_ticks_position set_ylabel set_alpha set_tick_params values to_rgb show T copy colorbar paper_figure imshow savefig nan format_figure gca array show T squeeze size min colorbar sqrt imshow title figure zeros max show title figure plot zeros reshape copy all print logical_not choice dynamics_vec_ind zeros array range mod add_edges_from list arange DiGraph reshape zip prod reachable_set link_graph_and_safe_set returnable_set copy reverse int asanyarray astype asanyarray arange multivariate_normal grid eye K zeros astar_path out_edges DiGraph out_edges range zeros_like len range len edges_iter pop list edges_iter append zeros pop list get_edge_data edges_iter append zeros | # SafeMDP [](https://travis-ci.org/befelix/SafeMDP) [](http://safemdp.readthedocs.io/en/latest/?badge=latest) Code for safe exploration in Markov Decision Processes (MDPs). This code accompanies the paper M. Turchetta, F. Berkenkamp, A. Krause, "Safe Exploration in Finite Markov Decision Processes with Gaussian Processes", Proc. of the Conference on Neural Information Processing Systems (NIPS), 2016, <a href="http://arxiv.org/abs/1606.04753" target="_blank">[PDF]</a> # Installation The easiest way to install use the library is to install the <a href="https://www.continuum.io/downloads" target="_blank">Anaconda<a/> Python distribution. Then, run the following commands in the root directory of this repository: ``` pip install GPy python setup.py install | 1,522 |
behzadhsni/BReG-NeXt | ['facial expression recognition'] | ['BReG-NeXt: Facial Affect Computing Using Adaptive Residual Networks With Bounded Gradient'] | codes/torch/inspect_checkpoint.py codes/torch/BReGNeXt.py codes/torch/utils.py codes/torch/trainer.py codes/BReG-NeXt.py decode clip validation_inputs run_training BReG_NeXt inputs augment main normalize residual_block focal_loss2 BReGNeXtResidualLayer BRegNextResidualBlock BRegNextShortcutModifier BReGNeXt main parse_numpy_printoption print_tensors_in_checkpoint_file _count_total_params focal_loss2 decode_and_preprocess_image BReGNeXtPTLDriver ShuffleDataset ones_like zeros_like equal where global_avg_pool conv_2d fully_connected activation batch_normalization residual_block uint8 decode_raw one_hot reshape set_shape cast int32 parse_single_example cond random_uniform stack cast float32 float32 cast clip_by_value validation_inputs BReG_NeXt inputs float32 placeholder merge_all group local_variables_initializer Saver int32 global_variables_initializer scalar run_training print compile search get_variable_to_shape_map decode sorted NewCheckpointReader items _count_total_params print get_tensor get_variable_to_shape_map get_variable_to_dtype_map get_printoptions set_printoptions type split file_name all_tensor_names all_tensors print print_tensors_in_checkpoint_file exit tensor_name log_softmax exp reshape unsqueeze LongTensor _image_transform | # BReG-NeXt Implementation of the paper **BReG-NeXt: Facial Affect Computing Using Adaptive Residual Networks With Bounded Gradient** BReG-NeXt paper can be found on [IEEE Xplore](https://ieeexplore.ieee.org/document/9064942) and [arXiv](https://arxiv.org/abs/2004.08495)  # Requirements Tensorflow 1.14.0 is suggested to run the code. For installing the rest of the required packages, run the following command: ``` pip install -r requirements.txt | 1,523 |
bekkermans/style_transfer | ['style transfer'] | ['Style Transfer by Relaxed Optimal Transport and Self-Similarity'] | utils.py main.py model.py loss.py StyleLoss VggEncoder RGBtoYUV GaussianSmoothing save_tensor_to_image LaplacianPyramid GetRandomIndices load_image CreateSpatialTensor numpy_to_tensor transpose as_tensor COLOR_BGR2RGB shape resize imread cvtColor imwrite squeeze transpose min COLOR_RGB2BGR resize numpy max cvtColor | # Style Transfer by Relaxed Optimal Transport and Self-Similarity This is a PyTorch implementation of the paper https://arxiv.org/abs/1904.12785 ### How to use ``` git clone git@github.com:bekkermans/style_transfer.git pip install -r requirements.txt python main.py {CONTENT IMAGE PATH} {STYLE IMAGE PATH} ``` ### Examples <img src="images/mone.jpg" width="200" height="200"> <img src="images/mone-content.jpg" width="200" height="200"> <img src="images/mone-style.jpg" width="200" height="200"> <br> | 1,524
bel2scm/bel2scm | ['counterfactual inference'] | ['Leveraging Structured Biological Knowledge for Counterfactual Inference: a Case Study of Viral Pathogenesis'] | src/bel2scm/gen_test_data.py src/bel2scm/neurips_bel2scm/node.py src/bel2scm/neurips_bel2scm/bel_graph.py src/bel2scm/neurips_bel2scm/scm.py src/bel2scm/generation/covid_scm_cf.py src/bel2scm/singlecell/parse.py src/bel2scm/neurips_bel2scm/config.py src/bel2scm/causal_graph.py tests/test_bel2scm.py src/bel2scm/neurips_bel2scm/utils.py tests/tests_plots_known_parameters_scm.py src/bel2scm/neurips_bel2scm/constants.py src/bel2scm/neurips_bel2scm/parent_interaction_types.py src/bel2scm/neurips_bel2scm/parameter_estimation.py src/bel2scm/generation/utils.py src/bel2scm/version.py src/bel2scm/graph_node.py setup.py tests/test_plots_bel2scm.py src/bel2scm/singlecell/__init__.py tests/__init__.py src/bel2scm/generation/covid_sigmoid_scm_cf.py cf_graph cg_graph str_graph bel_graph dep_vars data_gen indep_vars mle_node scm_node bayes_node cg_node get_version get_git_hash COVID_SCM percentage_in scm_covid_counterfactual InvalidNoiseType SigmoidSCM scm_covid_counterfactual run BelGraph Config get_variable_type_from_label Node NodeData SigmoidNet TrainNet ParameterEstimation ParentInteractionTypes SCM get_exogenous_distribution get_child_name_list get_parent_samples get_distribution all_parents_visited sample_with_and_interaction get_sample_for_binary_node get_sample_for_non_roots get_sample_for_continuous_node get_parent_tensor load_scm_object json_load save_scm_object get_celltype_barcodes_from_patient get_raw_barcodes_for_patient library_size_factors get_all_samples extract_celltype_from_patient get_metadata_barcodes_from_patient get_model_genes log_norm_counts b2a parse_mtx get_matrix_from_h5 main TestSCM TestSCM format Beta sample Exponential LogNormal append Uniform range format Bernoulli Beta sample append range len dep_vars Tensor indep_vars update_noise_svi model EmpiricalMarginal tolist infer update_noise_importance do COVID_SCM SigmoidSCM join to_csv SigmoidSCM DataFrame scm_covid_counterfactual items list list keys intersection OrderedDict list keys squeeze append view len list get_distribution keys extend shape mmread read_csv get_celltype_barcodes_from_patient columns join get_GEO basename get_metadata_attribute columns download_supplementary_files parse_mtx get_matrix_from_h5 print join name append get_celltype_barcodes_from_patient sum run manual_seed | # bel2scm <p> <a href="https://github.com/bel2scm/bel2scm/actions?query=workflow%3ATests"> <img alt="GitHub Actions" src="https://github.com/bel2scm/bel2scm/workflows/Tests/badge.svg" /> </a> <a href="https://pypi.org/project/bel2scm"> <img alt="PyPI" src="https://img.shields.io/pypi/v/bel2scm" /> </a> <a href="https://pypi.org/project/bel2scm"> <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/bel2scm" /> | 1,525 |
belaalb/CEVAE-VampPrior | ['causal inference'] | ['Causal Effect Inference with Deep Latent-Variable Models'] | train.py evaluation.py train_loop.py datasets.py model.py IHDP Evaluator encoder decoder fc_net TrainLoop | # CEVAE-VampPrior Pytorch (0.3.1) + Pyro (0.1.2) implementation of the Causal Effect Variational Autoencoder (CEVAE) with VampPrior - CEVAE: https://arxiv.org/pdf/1705.08821.pdf - VampPrior: https://arxiv.org/pdf/1705.07120.pdf IFT6269 project. Collaborators: Joao Monteiro, Yassine Yaakoubi, Abhishek Tiwari. | 1,526 |
bellsrik86/DeepFaceLab | ['face swapping'] | ['DeepFaceLab: Integrated, flexible and extensible face-swapping framework'] | models/Model_XSeg/Model.py mainscripts/VideoEd.py XSegEditor/QImageDB.py merger/MergerScreen/MergerScreen.py mainscripts/dev_misc.py merger/InteractiveMergerSubprocessor.py samplelib/SampleGeneratorImageTemporal.py core/leras/models/__init__.py samplelib/Sample.py core/joblib/__init__.py core/imagelib/estimate_sharpness.py core/joblib/MPFunc.py core/leras/layers/DenseNorm.py core/leras/nn.py samplelib/SampleProcessor.py core/leras/archis/__init__.py core/leras/layers/Conv2D.py core/stdex.py core/imagelib/text.py core/leras/models/PatchDiscriminator.py core/leras/__init__.py models/Model_XSeg/__init__.py core/leras/layers/DepthwiseConv2D.py core/imagelib/equalize_and_stack_square.py samplelib/__init__.py core/imagelib/sd/draw.py core/randomex.py core/joblib/MPClassFuncOnDemand.py core/leras/optimizers/__init__.py core/leras/initializers/__init__.py mainscripts/Extractor.py mainscripts/Sorter.py core/mathlib/__init__.py core/leras/optimizers/OptimizerBase.py core/imagelib/draw.py samplelib/SampleGeneratorBase.py core/leras/layers/BatchNorm2D.py core/leras/layers/ScaleAdd.py XSegEditor/QCursorDB.py models/__init__.py core/joblib/SubprocessGenerator.py DFLIMG/DFLIMG.py core/leras/layers/__init__.py models/ModelBase.py facelib/FaceEnhancer.py core/joblib/ThisThreadGenerator.py XSegEditor/QIconDB.py core/imagelib/blursharpen.py merger/MergeMasked.py core/cv2ex.py models/Model_Quick96/__init__.py main.py core/leras/archis/ArchiBase.py localization/localization.py facelib/LandmarksProcessor.py core/leras/layers/Dense.py models/Model_SAEHD/__init__.py mainscripts/Merger.py mainscripts/XSegUtil.py core/imagelib/filters.py XSegEditor/QStringDB.py core/leras/initializers/CA.py core/imagelib/__init__.py core/leras/models/ModelBase.py samplelib/SampleGeneratorFaceTemporal.py core/leras/layers/InstanceNorm2D.py facelib/S3FDExtractor.py samplelib/SampleLoader.py samplelib/SampleGeneratorFaceXSeg.py core/qtex/qtex.py DFLIMG/__init__.py core/leras/optimizers/RMSprop.py core/structex.py core/leras/layers/BlurPool.py facelib/__init__.py core/leras/archis/DeepFakeArchi.py core/imagelib/SegIEPolys.py mainscripts/FacesetEnhancer.py merger/FrameInfo.py models/Model_SAEHD/Model.py core/mplib/__init__.py samplelib/SampleGeneratorFacePerson.py merger/MergerScreen/__init__.py core/imagelib/common.py core/imagelib/sd/__init__.py core/leras/layers/AdaIN.py core/leras/layers/LayerBase.py facelib/FANExtractor.py core/leras/models/XSeg.py samplelib/SampleGeneratorFace.py merger/MergeAvatar.py core/pathex.py core/leras/layers/Conv2DTranspose.py core/qtex/QXMainWindow.py merger/__init__.py core/imagelib/morph.py core/joblib/SubprocessorBase.py mainscripts/Trainer.py samplelib/PackedFaceset.py core/mathlib/umeyama.py mainscripts/Util.py core/interact/__init__.py core/leras/layers/FRNorm2D.py core/leras/layers/Saveable.py core/leras/layers/TLU.py facelib/XSegNet.py core/mplib/MPSharedList.py core/leras/models/CodeDiscriminator.py facelib/FaceType.py XSegEditor/XSegEditor.py merger/MergerConfig.py core/leras/ops/__init__.py core/interact/interact.py core/leras/device.py core/imagelib/sd/calc.py core/imagelib/reduce_colors.py samplelib/SampleGeneratorImage.py localization/__init__.py DFLIMG/DFLJPG.py core/qtex/__init__.py core/qtex/QSubprocessor.py core/qtex/QXIconButton.py samplelib/SampleGeneratorFaceCelebAMaskHQ.py core/imagelib/warp.py core/osex.py core/imagelib/color_transfer.py 
models/Model_Quick96/Model.py process_dev_test process_merge process_videoed_cut_video process_train process_faceset_enhancer process_xsegeditor process_xsegapply process_xsegremove process_xsegremovelabels process_videoed_video_from_sequence process_xsegfetch process_util process_extract fixPathAction process_videoed_extract_video process_sort bad_args process_videoed_denoise_image_sequence cv2_imwrite cv2_resize cv2_imread set_process_dpi_aware get_screen_size set_process_lowest_prio get_image_paths move_all_files write_bytes_safe get_first_file_by_stem get_image_unique_filestem_paths get_all_dir_names get_file_paths delete_all_files scantree get_paths get_all_dir_names_startswith random_normal suppress_stdout_stderr struct_unpack LinearMotionBlur blursharpen _scale_array color_transfer color_transfer_idt color_transfer_mkl reinhard_color_transfer lab_image_stats linear_color_transfer channel_hist_match color_transfer_mix color_transfer_sot color_hist_match overlay_alpha_image cut_odd_image normalize_channels draw_polygon draw_rect equalize_and_stack_square compute _calculate_sharpness_metric marziliano_method get_block_contrast _simple_thinning estimate_sharpness is_edge_block sobel apply_random_motion_blur apply_random_rgb_levels apply_random_hsv_shift apply_random_bilinear_resize apply_random_gaussian_blur morphTriangle morph_by_points applyAffineTransform reduce_colors SegIEPolys SegIEPolyType SegIEPoly get_text_image draw_text_lines draw_text _get_pil_font get_draw_text_lines warp_by_params gen_warp_params dist_to_edges random_circle_faded circle_faded InteractBase InteractColab InteractDesktop MPClassFuncOnDemand MPFunc SubprocessGenerator Subprocessor ThisThreadGenerator Devices Device nn ArchiBase DeepFakeArchi CAInitializerSubprocessor initializers AdaIN BatchNorm2D BlurPool Conv2D Conv2DTranspose Dense DenseNorm DepthwiseConv2D FRNorm2D InstanceNorm2D LayerBase Saveable ScaleAdd TLU CodeDiscriminator ModelBase UNetPatchDiscriminator PatchDiscriminator XSeg dssim concat average_gv_list resize2d_bilinear flatten rgb_to_lab resize2d_nearest space_to_depth tf_gradients random_binomial style_loss gelu init_weights tf_get_value upsample2d reshape_4D batch_set_value max_pool average_tensor_list gaussian_blur depth_to_space OptimizerBase RMSprop umeyama get_power_of_two rotationMatrixToEulerAngles polygon_area ArrayFillerSubprocessor MPSharedList IndexHost Index2DHost ListHost DictHostCli DictHost QSubprocessor QDarkPalette QActionEx QSize_to_np QImage_from_np QImage_to_np QPixmap_from_np QPoint_to_np QPoint_from_np QXIconButton QXMainWindow DFLIMG DFLJPG FaceEnhancer FaceType FANExtractor blur_image_hull_mask mirror_landmarks get_face_struct_mask estimate_pitch_yaw_roll convert_98_to_68 expand_eyebrows get_rect_from_landmarks get_transform_mat draw_rect_landmarks get_cmask transform_points estimate_averaged_yaw calc_face_pitch alpha_to_color get_image_eye_mask draw_landmarks get_image_hull_mask S3FDExtractor XSegNet dev_test_68 dev_test1 dev_resave_pngs extract_vggface2_dataset extract_umd_csv dev_segmented_trash process_folder FacesetEnhancerSubprocessor extract_video video_from_sequence denoise_image_sequence cut_video remove_xseg remove_xseg_labels apply_xseg fetch_xseg FrameInfo InteractiveMergerSubprocessor MergeFaceAvatar process_frame_info MergeMasked MergeMaskedFace MergerConfigMasked MergerConfigFaceAvatar MergerConfig ScreenManager ScreenAssets Screen ModelBase PreviewHistoryWriter import_model QModel SAEHDModel XSegModel PackedFaceset Sample SampleType SampleGeneratorBase 
SampleGeneratorFace SampleGeneratorFaceCelebAMaskHQ MaskType SampleGeneratorFacePerson Index2DHost SampleGeneratorFaceTemporal SampleGeneratorFaceXSeg SegmentedSampleFilterSubprocessor SampleGeneratorImage SampleGeneratorImageTemporal FaceSamplesLoaderSubprocessor SampleLoader SampleProcessor QCursorDB QIconDB QImageDB QStringDB ImagePreviewSequenceBar QUIConfig QCanvasOperator LoaderQSubprocessor CanvasConfig OpMode QCanvas DragType ViewLock ColorScheme QCanvasControlsLeftBar start QCanvasControlsRightBar MainWindow PTEditMode main set_process_lowest_prio main set_process_lowest_prio unpack_faceset pack save_faceset_metadata log_info restore_faceset_metadata_folder pack_faceset save_faceset_metadata_folder restore_faceset_metadata Path input_dir unpack recover_original_aligned_filename set_process_lowest_prio add_landmarks_debug_images main set_process_lowest_prio main set_process_lowest_prio output_ext fps extract_video output_dir input_file set_process_lowest_prio audio_track_id from_time bitrate to_time cut_video input_file set_process_lowest_prio factor denoise_image_sequence set_process_lowest_prio input_dir video_from_sequence set_process_lowest_prio Path set_process_lowest_prio input_dir process_folder dev_test set_process_lowest_prio input_dir start Path set_process_lowest_prio input_dir model_dir apply_xseg Path input_dir set_process_lowest_prio Path remove_xseg set_process_lowest_prio input_dir remove_xseg_labels Path set_process_lowest_prio input_dir Path fetch_xseg set_process_lowest_prio input_dir print_help exit loader_func asarray bytearray imencode suffix shape normalize_channels resize nice SetPriorityClass HANDLE GetCurrentProcess SetProcessDPIAware user32 write_bytes parent name unlink rename exists is_dir scandir str list scandir any Path scantree exists append remove get_image_paths name stem set add verbose_print_func Path exists Path exists Path exists str list lower scandir Path startswith append exists str sorted list path lower scandir Path exists name Path rename get_file_paths unlink Path get_file_paths normal empty prod range calcsize warpAffine ones getRotationMatrix2D zeros sum medianBlur addWeighted ones zeros GaussianBlur max dtype reshape astype copy argsort shape bilateralFilter fill empty range eps T clip reshape eig dot shape sqrt cov mean diag T reshape min astype float32 empty_like solve dot shape histogram interp max range _scale_array uint8 astype float32 merge lab_image_stats COLOR_LAB2BGR cvtColor split T reshape transpose mean dot eigh eye cholesky split min max float64 astype shape unique interp ravel dtype astype shape channel_hist_match range uint8 astype float32 COLOR_BGR2LAB color_transfer_sot COLOR_LAB2BGR cvtColor uint8 color_transfer_idt color_transfer_mkl astype float32 reinhard_color_transfer linear_color_transfer color_transfer_sot clip shape repeat len shape shape range tuple line range len draw_polygon concatenate shape resize expand_dims max enumerate T convolve square mean sqrt array shape zeros float64 marziliano_method astype canny sobel gradient atan2 shape any zeros round range int exp slice get_block_contrast shape flipud round zeros is_edge_block rot90 range cvtColor COLOR_BGR2GRAY rand random clip array COLOR_HSV2BGR random merge COLOR_BGR2HSV randint clip cvtColor split LinearMotionBlur randint random randint GaussianBlur random int rand random shape resize float32 getAffineTransform float32 fillConvexPoly shape boundingRect int32 applyAffineTransform zeros expand_dims array shape morphTriangle zeros simplices fromarray 
uint8 convert astype COLOR_RGB2BGR array cvtColor truetype asarray Draw get_default_ttf_font_name concatenate text new _get_pil_font shape clip draw_text range len draw_text_lines zeros shape T random astype copy float32 getRotationMatrix2D dict uniform linspace random_normal warpAffine remap resize norm clip einsum concatenate norm reshape empty abs clip max random randint initializer inputs append batch_set_value run gradients expand_dims __enter__ __exit__ enumerate reduce_mean __enter__ __exit__ concat pow tanh sqrt pi as_list reshape tile transpose value resize transpose value resize transpose reshape transpose randint float32 pad make_kernel tile depthwise_conv2d gaussian_blur dtype constant arange reshape float32 square reduce_mean reducer cast softmax tile max as_list reshape transpose as_list reshape transpose constant reshape multiply matmul cast svd T ones matrix_rank mean dot eye sum diag sqrt atan2 shape Format_Grayscale8 Format_BGR888 Format_ARGB32 height reshape convertToFormat width constBits setsize range squeeze invertAffineTransform shape transform expand_dims get norm getAffineTransform polygon_area astype float32 transform_points sqrt estimate_averaged_yaw array transform_points FULL_NO_ALIGN get_transform_mat float32 array copy concatenate expand_eyebrows fillConvexPoly convexHull zeros int getStructuringElement astype fillConvexPoly MORPH_ELLIPSE convexHull dilate zeros GaussianBlur shape zeros concatenate process copy blend alpha_to_color zeros get_image_hull_mask gdf max clip int blur getStructuringElement min erode argwhere MORPH_ELLIPSE expand_dims copy draw_landmarks zeros expand_eyebrows concatenate polylines tuple shape get_image_hull_mask array circle get_transform_mat draw_rect transform_points draw_polygon draw_landmarks array array rotationMatrixToEulerAngles concatenate astype float32 pi solvePnP zeros array clip get pop get_image_paths parent log_info name stem progress_bar_generator get_all_dir_names Path mkdir run fromString split cv2_imread Path normalize_channels exists input_bool str log_info name stem append get_image_paths get_rect_from_landmarks unlink mkdir parent cv2_imwrite progress_bar_generator read_text split get str get_image_paths parent log_info name len unlink Path mkdir split log_err run range exists fromString input_bool get_image_paths progress_bar_generator get_all_dir_names Path x get_image_paths cv2_imwrite progress_bar_generator cv2_imread Path get_image_paths parent name stem rename Path mkdir append input_bool join get_image_paths log_info parent name copy unlink rmtree mkdir run update str get_image_paths parent input_str stem output get_first_file_by_stem unlink input_int mkdir Path log_err input run str suffix parent input_str stem overwrite_output input_int log_err Path input max run update str suffix parent progress_bar_generator output input_int rename log_err Path run clip enumerate suffix input_str wait input_int Path max input_bool str stem input update run_async get_image_paths close mkdir parent overwrite_output get_first_file_by_stem log_err probe load extract initialize get_image_paths log_info set_xseg_mask progress_bar_generator astype float32 get_resolution ask_choose_device shape XSegNet resize save load str get_image_paths log_info parent name has_polys progress_bar_generator copy get_seg_ie_polys mkdir load get_image_paths log_info set_xseg_mask input_str progress_bar_generator has_xseg_mask save load get_image_paths log_info input_str has_seg_ie_polys progress_bar_generator save set_seg_ie_polys warpAffine 
get_transform_mat astype float32 cv2_imread normalize_channels filename clip sharpen_func sharpen_mode concatenate predictor_func add_source_image process_frame_info temporal_face_count append range sharpen_amount predictor_func color_transfer_mkl motion_power bicubic_degrade_power motion_blur_power linear_color_transfer color_transfer_mix boundingRect resize reduce_colors max clip face_enhancer_func hist_match_threshold medianBlur super_resolution_power WARP_INVERSE_MAP ones LinearMotionBlur shape pad blur_mask_modifier image_denoise_power masked_hist_match blursharpen range color_hist_match BORDER_TRANSPARENT warpAffine sharpen_mode xseg_256_extract_func seamlessClone color_transfer_idt astype copy reinhard_color_transfer empty_like motion_deg INTER_CUBIC MORPH_ELLIPSE color_transfer_sot dilate GaussianBlur get_image_hull_mask NORMAL_CLONE uint8 int erode_mask_modifier getStructuringElement get_transform_mat float32 erode argwhere blursharpen_amount color_degrade_power landmarks_list concatenate astype float32 cv2_imread shape normalize_channels MergeMaskedFace filepath clip enumerate str parent cv2_imread locals __import__ globals dict setApplicationName setPalette QDarkPalette Path show str initialize log_info setWindowIcon addApplicationFont AA_EnableHighDpiScaling setStyle setFont gettempdir setAttribute QApplication path_contains app_icon MainWindow exec_ parent QFont raise_ AA_UseHighDpiPixmaps | <table align="center" border="0"> <tr><td colspan=2 align="center"> # DeepFaceLab <a href="https://arxiv.org/abs/2005.05535"> <img src="https://static.arxiv.org/static/browse/0.3.0/images/icons/favicon.ico" width=14></img> https://arxiv.org/abs/2005.05535</a> ### the leading software for creating deepfakes <img src="doc/DFL_welcome.png" align="center"> </td></tr> <tr><td colspan=2 align="center"> | 1,527 |
belph/wiki-sem-500 | ['sentiment analysis', 'outlier detection', 'word embeddings'] | ['Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations'] | install_dependencies.py src/lib/polyglot/mapping/embeddings.py src/lib/polyglot/mapping/tests/test_embeddings.py src/lib/polyglot/base.py src/tests/test_evaluator.py src/tests/test_embedding.py evaluate.py src/tests/utils_tests.py src/lib/polyglot/utils.py src/evaluator.py src/lib/polyglot/mapping/expansion.py src/lib/polyglot/mapping/base.py src/lib/polyglot/mapping/tests/test_expansion.py src/lib/polyglot/mapping/__init__.py src/utils.py src/outlier_test_group.py src/embeddings.py src/tests/test_test_group.py run_tests.py src/lib/polyglot/__init__.py read_dataset_directory score_embedding install_dependencies WrappedEmbedding Embedding phrase_gen Evaluator ResolvedTestGroup TestGroup decode similarity TextFiles Sequence TextFile TokenSequence _decode _open _pickle_method pretty_list _print _unpickle_method VocabularyBase CountedVocabulary OrderedVocabulary count Embedding VocabExpander DigitExpander CaseExpander EmbeddingTest MixedExpansionTest CaseExpanderTest DigitExpanderTest EmbeddingTest EvaluatorTest TestGroupTest nop EmbeddingTestCase get_test_vectors scandir num_total_groups evaluate print num_cases Evaluator accuracy opp main range len splitext isinstance open print encode PY3 __self__ isinstance lstrip __class__ __name__ __mro__ str decode format append enumerate | This repository contains the WikiSem500 dataset described in "Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations" by Philip Blair, Yuval Merhav, and Joel Barry. The test groups themselves can be found in `wiki-sem-500.tar.gz` (`wiki-sem-500-tokenized.tar.gz` is pre-tokenized). The structure of the archive is as follows: ``` wiki-sem-500 ├── de │ ├── Q101352.txt │ ├── Q105000.txt │ ├── Q1061151.txt │ ├── Q1065118.txt │ ... | 1,528 |
bemova/Building-Efficient-ConvNets-using-Redundant-Feature-Pruning | ['network pruning'] | ['Building Efficient ConvNets using Redundant Feature Pruning'] | pruning_package/pruner.py main.py pruning_package/model_compressor.py | VGG4CIFAR10 train test PruneSimilarFilter replace_layers prune data max print model time format state_dict criterion backward print zero_grad SGD range test parameters save step net CrossEntropyLoss enumerate data items list isinstance Sequential in_features out_channels Conv2d from_numpy out_features zeros numpy Linear | # Building-Efficient-ConvNets-using-Redundant-Feature-Pruning A PyTorch implementation of the paper https://arxiv.org/pdf/1802.07653.pdf. PyTorch version: 0.4.0 | 1,529
ben-ix/AdaptiveTPOT | ['automl'] | ['An Adaptive and Near Parameter-free Evolutionary Computation Approach Towards True Automation in AutoML'] | tpot/base.py tpot/metrics.py tpot/config/classifier_sparse.py tpot/builtins/one_hot_encoder.py tpot/decorators.py tpot/tpot.py tpot/config/regressor.py tpot/operator_utils.py tpot/config/classifier.py tpot/_version.py tpot/config/regressor_light.py tpot/export_utils.py tpot/config/classifier_mdr.py tpot/config/regressor_sparse.py tpot/builtins/stacking_estimator.py tpot/config/classifier_light.py helpers.py tpot/config/regressor_mdr.py run_tpotclf.py tpot/builtins/combine_dfs.py tpot/gp_types.py tpot/gp_deap.py tpot/builtins/feature_set_selector.py tpot/builtins/feature_transformers.py tpot/driver.py tpot/builtins/zero_count.py run_kfold read_data args clf_score _categorical_features reg_score setup_output_dir run_and_time_estimator _to_numeric run main train_test_split write_file kfold_split handler TPOTBase _pre_test load_scoring_function float_range tpot_driver _read_data_file _get_arg_parser positive_integer main _print_args _indent export_pipeline generate_pipeline_code generate_export_pipeline_code get_by_name generate_import_code pipeline_code_wrapper _combine_dfs _starting_imports _process_operator expr_to_tree cxOnePoint mutate_random_individual mutShrink _wrapped_cross_val_score mutNodeReplacement pick_two_individuals_eligible_for_crossover adaptiveEa initialize_stats_dict mutInsert varOr Output_Array balanced_accuracy TPOTOperatorClassFactory Operator source_decode ARGTypeClassFactory ARGType set_sample_weight TPOTClassifier TPOTRegressor CombineDFs FeatureSetSelector ContinuousSelector CategoricalSelector auto_select_categorical_features _X_selected OneHotEncoder _transform_selected StackingEstimator ZeroCount int time clf_score reg_score predict fit append all unique values get_dummies replace reshape LabelEncoder NaN _to_numeric fit_transform read_csv values drop StratifiedKFold fn_to_run kfold_split read_data read_data seed dataset fold print seed str getcwd fold runtime makedirs print parse_args add_argument ArgumentParser hook_sigint int float add_argument ArgumentParser str sorted format items print INPUT_FILE read_csv pop rsplit format insert getcwd print import_module getattr score drop SCORING_FN export max list TARGET_NAME train_test_split OUTPUT_FILE load_scoring_function format reversed zip tpot_type keys items print _read_data_file _print_args fit parse_args tpot_driver generate_export_pipeline_code generate_import_code pipeline_code_wrapper expr_to_tree count append prim_to_list pop join sorted merge_imports _starting_imports keys root get_by_name len _process_operator format _process_operator format len format get_by_name extend export _combine_dfs append shuffle randint mutate len mutate_random_individual random set pick_two_individuals_eligible_for_crossover mate append range pop sum list evaluate record print stream select per_generation_function zip append initialize_stats_dict max range varOr defaultdict ret shuffle append searchSubtree enumerate list product args insert clone shuffle searchSubtree range enumerate len clone shuffle append searchSubtree range enumerate list term product args insert clone shuffle isclass searchSubtree range enumerate len check_scoring list build_graph indexable check_cv Delayed set_sample_weight steps split list set append float sum pop join format exec eval startswith split count items sorted format issubclass list tuple source_decode ARGTypeClassFactory append type keys append 
issparse range unique zeros sum arange logical_not _X_selected check_array transform | # Adaptive TPOT This is an (independent) modification of TPOT designed to be largely free of evolutionary hyperparameters. Specifically, the population begins as a single randomly chosen estimator, and pipelines are evolved automatically **without** needing to specify - Population size - Offspring Size - Mutation Rate - Crossover Rate - Reproduction Rate The goal is further automation in AutoML. The main changes can be found in [adaptiveEa](https://github.com/ben-ix/tpot-adaptive/blob/master/tpot/gp_deap.py#L178), which is | 1,530
benatorc/PA-Graph-Transformer | ['molecular property prediction'] | ['Path-Augmented Graph Transformer Network'] | parse/split_data.py utils/path_utils.py modules/conv_layer.py train/train_ring.py utils/data_utils_test.py models/atom_predictor.py datasets/mol_dataset.py train/train_base.py parse/generate_ring_data.py graph/mol_graph.py preprocess/shortest_paths.py utils/train_utils.py utils/write_utils.py utils/data_utils.py arguments.py utils/test_path_utils.py modules/attention.py preprocess/test_shortest_paths.py train/train_prop.py models/prop_predictor.py utils/model_utils.py models/mol_transformer.py models/mol_conv_net.py graph/mol_features.py create_dirs get_args write_args MolDataset get_loader bt_index_to_float get_bt_feature get_bond_features get_path_bond_feature get_atom_features onek_unk_encoding get_bt_index Bond MolGraph Atom Molecule AtomPredictor MolConvNet MolTransformer PropPredictor AttentionLayer GraphConv generate_ring_data test read_smiles generate_complex_ring_data is_conjugated_path generate_ring_position_data main ordered_pair main split_data read_mol_smiles main get_ring_paths ordered_pair get_shortest_paths parse_mol test_fused_ring test_simple_aro_ring test_simple assert_dict_equal test_complex test_simple_ring train_model test_model load_datasets init_model get_test_loader main run_epoch write_ring_output load_datasets init_model get_test_loader main run_epoch read_smiles_from_dir map_equiv stats_tracker load_shortest_paths read_smiles_ring_data read_splits read_smiles_from_file read_smiles_multiclass create_dir_if_not_exists dict_to_pstr dict_to_dstr test_stats_tracker convert_to_2D compute_max_atoms convert_to_3D merge_path_inputs get_num_path_features get_path_features get_ring_features get_path_input get_path_atoms ordered_pair test_path_input get_args compute_acc compute_auc backprop_grads get_grad_norm write_props print create_dirs add_argument write_args ArgumentParser device output_dir parse_args create_dir_if_not_exists items sorted write close output_dir open DataLoader MolDataset batch_size fc symbol degree exp_valence onek_unk_encoding is_dummy imp_valence len onek_unk_encoding bond_type onek_unk_encoding GetBondType append readlines open MolFromSmiles list sample ordered_pair range len MolFromSmiles list GetShortestPath sample get_non_ring_neighbor append GetIsAromatic ordered_pair range len enumerate GetBondBetweenAtoms MolFromSmiles GetShortestPath get_neigh_if_conjugated sample is_conjugated_path append randint ordered_pair GetNumAtoms range enumerate generate_complex_ring_data generate_ring_data list items n_samples add_argument write close tqdm read_smiles data_path generate_complex_ring_data n_pos generate_ring_position_data n_neg ArgumentParser output_dir parse_args open floor len read_smiles_ring_data rings range shuffle n_splits read_smiles_multiclass multi enumerate join read_smiles_from_file split_data len append ordered_pair enumerate len get_atom_frag GetShortestPath get_ring_paths GetNumAtoms range GetMolFrags len append readlines close open MolFromSmiles dump read_mol_smiles print set_trace tqdm get_shortest_paths open data_dir max_path_length parse_mol items list assert_dict_equal MolFromSmiles get_shortest_paths MolFromSmiles get_shortest_paths MolFromSmiles get_shortest_paths GetNumAtoms range MolFromSmiles get_shortest_paths GetNumAtoms range MolFromSmiles get_shortest_paths load train_func join state_dict print write close range model_dir num_epochs load_state_dict output_dir save dict_to_pstr dict_to_dstr open load print 
load_state_dict dict_to_pstr Adam PropPredictor parameters device to read_splits get_loader read_splits get_loader data init_model test_mode n_rounds test_model exit append get_args train_model load_datasets load_shortest_paths create_dirs print array zero_grad device tensor abs open squeeze write_props add_stat append to backprop_grads cat use_paths stats_tracker batch_splits compute_auc close mean item compute_acc enumerate MolGraph backward tqdm numpy len AtomPredictor n_classes n_classes list cross_entropy zip float write_ring_output int write enumerate item readlines open readlines close open append split append readlines open int readlines open append split read_smiles_from_file data load print open makedirs sorted keys items list get_stats add_stat stats_tracker byte ones stack repeat unsqueeze append range narrow cat append narrow cat enumerate get_num_path_features view concatenate get_path_features reshape get_ring_features ordered_pair stack get_path_atoms ring_embed tensor p_embed range append enumerate zeros tensor get_num_path_features enumerate ring_embed N_BOND_FEATS p_embed max_path_length GetBondBetweenAtoms get_path_bond_feature append zeros range len zeros merge_path_inputs get_args squeeze get_path_input max norm set_trace named_parameters clip_grad_norm_ get_grad_norm parameters add_stat max_grad_norm step mean argmax astype roc_auc_score write range len | # Path-Augmented Graph Transformer Network This is the github repo for the paper "Path-Augmented Graph Transformer Network" All data (and splits used for experiments are under data.zip) These are the require packages and set up for a conda environment (can be slightly different depending on system). ``` conda create -c rdkit -n prop_predictor rdkit source activate prop_predictor conda install pytorch torchvision cudatoolkit=10.0 -c pytorch conda install scikit-learn tqdm ``` | 1,531 |
benchoi93/TrajGAIL | ['imitation learning'] | ['TrajGAIL: Generating Urban Vehicle Trajectories using Generative Adversarial Imitation Learning'] | models/utils/utils.py models/gail/network_models/q_net.py models/gail/algo/vanilla_pytorch.py models/gail/algo/infogail_train.py scripts/behavior_clone/run_bc_mmc.py models/behavior_clone/mmc_predictor.py models/gail/algo/ppo_pytorch.py models/utils/plotutils.py test_sequencelevel.py models/gail/network_models/discriminator_pytorch.py models/irl/algo/value_iteration.py scripts/behavior_clone/run_bc_rnn.py models/gail/network_models/discriminator_rnn.py scripts/gail/run_gail.py mdp/shortestpath.py models/gail/network_models/discriminator_wgail.py models/gail/algo/trainer.py models/gail/algo/gailrnn_pytorch.py models/irl/models/maxent_irl.py models/gail/network_models/policy_net_rnn.py models/gail/network_models/infoq_rnn.py main.py models/gail/network_models/policy_net_pytorch.py models/irl/models/maxent_irl_stateaction.py models/behavior_clone/rnn_predictor.py scripts/irl/demo_shortestpath.py models/gail/algo/gailrnn_ppo.py test_meteor test_bleu argparser ShortestPath MMC_predictor RNN_predictor GAILRNNTrain sequence_data INFOGAILTrain PPOTrain atanh Trainer atanh GAILTrain Discriminator Discriminator Discriminator infoQ sample_gumbel sample_gumbel_softmax infoQ_RNN Policy_net Value_net Policy_infoGAIL Value_net Value_infoGAIL Policy_net StateSeqEmb Posterior_net action_value_iteration value_iteration maxent_irl compute_state_visition_freq maxent_irl_stateaction plot_seperate_barchart plot_summary plot_linechart plot_summary_maxent plot_barchart model_summary_writer WeightClipper hard_update trajs_squeezedtensor identify_routes expert_compute_state_visitation_freq get_gae trajs_to_tensor unsqueeze_trajs sequence_data sequence_data_vanilla check_RouteID sigmoid compute_state_visitation_freq compute_state_action_visitation_freq expert_compute_state_action_visitation_freq normalize arr_to_tensor find_state argparser argparser main argparser argparser add_argument ArgumentParser clamp log sample exp sum sample_gumbel log len exp n_states max_actions transpose idx2pos policy_mask copy get_action_list uniform vstack zeros sum max range len T exp n_states max_actions transpose idx2pos policy_mask copy get_action_list uniform vstack pos2idx zeros sum max range len n_states idx2pos get_action_list zeros sum range len format value_iteration print compute_state_visition_freq dot mean uniform zeros range len format n_states max_actions print policy_mask dot action_value_iteration uniform compute_state_action_visitation_freq mean expert_compute_state_action_visitation_freq sum range subplots add_figure zeros_like float64 where DataFrame max values list plot_linechart set_xlabel map scatter append sort_values sum range jensenshannon reset_index plot identify_routes astype unique plot_barchart float keys summary_cnt set_ylabel array add_scalar subplots add_figure float64 DataFrame list plot_linechart set_xlabel scatter sum destinations sort_values range jensenshannon plot identify_routes astype unique plot_barchart float add_scalar summary_cnt set_ylabel array origins len list set_size_inches subplots arange plot set_xlabel rainbow reversed set_ylabel linspace legend ceil xticks range len set_size_inches subplots arange set_xlabel rainbow bar set_ylabel linspace legend ceil xticks range len set_size_inches subplots arange set_title set_xlabel rainbow bar set_ylabel savefig linspace range len list ones_like zeros_like reversed mean Tensor std range copy_ 
parameters data zip ones zeros sum max range list ones map array int32 max range len array append max range len find_state LongTensor join index min max join list sorted len append keys range split zeros len n_states max_actions len get_action_list index cur_state pos2idx zeros action n_states idx2pos get_action_list zeros sum range len n_states max_actions identify_routes idx2pos get_action_list start pos2idx zeros sum max range len data hard_update ShortestPath randint wasser gangnam Policy_net device cuda n_states plot_summary ones set_device RMSprop to sum train_wasser_discrim_step range format pretrain find_state unroll_trajectory2 mean GAILRNNTrain take StateSeqEmb is_available train arr_to_tensor states int time Value_net add_scalar print summary_cnt import_demonstrations Discriminator_rnn trajs_to_tensor parameters iteration Discriminator_wgail zeros n_actions vectorize array origins len | # TrajGAIL ### Introduction Generative model for urban vehicle trajectories based on Deep Learning This repository include implementations of : - Markov Mobility Chain Model for next location prediction (Gambs et al. 2012) - RNN based trajectory generator (Choi et al. 2018) - MaxEnt inverse reinforcement learning (Ziebart et al. 2008) - TrajGAIL based on Generative Adversarial Imitation Learning (Ho et al. 2016, Choi et al. 2020) - ShortestPath World (MDP for routing imitations) ### Citations | 1,532 |
benedekrozemberczki/MUSAE | ['network embedding'] | ['Multi-scale Attributed Node Embedding'] | src/main.py src/musae.py src/walkers.py src/utils.py src/parser.py main MUSAE parameter_parser tab_printer load_graph create_documents load_features alias_draw SecondOrderRandomWalker FirstOrderRandomWalker alias_setup save_logs learn_embedding MUSAE do_sampling save_embedding add_argument ArgumentParser sorted print draw add_rows vars Texttable keys selfloop_edges tolist from_edgelist remove_edges_from read_csv load open pop len append zeros enumerate int rand floor len | MUSAE ============ [](https://arxiv.org/abs/1909.13021) [](https://codebeat.co/projects/github-com-benedekrozemberczki-musae-master) [](https://github.com/benedekrozemberczki/MUSAE/archive/master.zip) [](https://twitter.com/intent/follow?screen_name=benrozemberczki) The reference implementation of **Multi-Scale Attributed Node Embedding. (Journal of Complex Networks 2021)** <p align="center"> <img width="800" src="musae.jpg"> </p> ### Abstract <p align="justify"> We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes. We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings. Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.</p> | 1,533 |
benedekrozemberczki/role2vec | ['network embedding'] | ['Learning Role-based Graph Embeddings'] | src/main.py src/param_parser.py src/walkers.py src/role2vec.py src/weisfeiler_lehman_labeling.py src/utils.py src/motif_count.py main parameter_parser Role2Vec tab_printer load_graph create_documents alias_draw SecondOrderRandomWalker FirstOrderRandomWalker alias_setup WeisfeilerLehmanMachine Role2Vec do_walks learn_embedding tab_printer create_structural_features save_embedding add_argument ArgumentParser sorted print draw add_rows vars Texttable keys from_edgelist tolist selfloop_edges remove_edges_from pop len append zeros enumerate int rand floor len | Role2Vec ============================== [](https://arxiv.org/abs/1802.02896) [](https://codebeat.co/projects/github-com-benedekrozemberczki-role2vec-master) [](https://github.com/benedekrozemberczki/role2vec/archive/master.zip) [](https://twitter.com/intent/follow?screen_name=benrozemberczki) A scalable parallel **gensim** implementation of **Learning Role-based Graph Embeddings (IJCAI 2018)**. <p align="center"> <img width="500" src="orbit.png"> </p> -------------------- ### Abstract <p align="justify"> | 1,534 |
benedictquartey/RollE | ['autonomous driving'] | ['Affordable Modular Autonomous Vehicle Development Platform'] | deployment/on_vehicle_and_remote/scripts/data_collection.py deployment/on_computer/remotes/pilot_transmitter.py deployment/on_vehicle_and_remote/scripts/local_autopilot.py deployment/on_vehicle_and_remote/scripts/rover_actuation.py deployment/on_vehicle_and_remote/utils/comm.py deployment/on_computer/learning/data_processing.py deployment/on_computer/learning/train.py deployment/on_computer/learning/plots.py deployment/on_computer/utils/comm.py deployment/on_computer/remotes/soft_pilot.py deployment/on_computer/learning/model.py augment_data formatImage prepareData data_check loadImage cnn mean_square_error model_accuracy train_model batch_generator data_distribution_check main networking_init send_value send_stop_values main send_value init publisher_init publish_value defineClient subscriber_init connected message_in subscribe networking_init start init compile_data connected message_in networking_init send_value init main predict reset_steer networking_init steer throttle range_map connected message_in init stop steer_raw reset_throttle publisher_init publish_value defineClient subscriber_init connected message_in subscribe train_test_split read_csv values format ndarray isinstance print loadImage append read_csv values makedirs COLOR_RGB2YUV INTER_AREA cvtColor resize int astype choice loadImage int32 range flip clip Lambda Sequential add Dense Conv2D Flatten Dropout show plot xlabel ylabel title legend show plot xlabel ylabel title legend clip permutation augment_data formatImage empty loadImage show xlabel ylabel title hist figure prepareData batch_generator mean_square_error print model_accuracy fit_generator cnn history prepareData summary ModelCheckpoint compile publisher_init defineClient publish_value float send_value Serial str readline print networking_init send_value send_stop_values float find publisher_init defineClient clear addstr getkey nodelay init Client print str print connect print connect loop_forever topic str print payload strftime loop publish subscribe float print BROKER connect print PiCamera networking_init PiRGBArray sleep print DataFrame to_csv sleep format imwrite loop print init truncate capture_continuous next array append makedirs load_model float resize PiCamera INTER_AREA predict PiRGBArray sleep capture_continuous COLOR_RGB2YUV next array truncate cvtColor reset_steer reset_throttle range_map set_pwm print steer set_pwm range_map set_pwm set_pwm sleep steer throttle loop_forever set_pwm | # RollE [Affordable Modular Autonomous Vehicle Development Platform] Every year, 1.27 million people die in road accidents, 90% of which are down to human error. In 2015 there were 256,179 victims in Africa alone. RollE is an open-source programme to develop modular self-driving cars in a collaborative manner. Students and researchers have access to this technology to test their ideas and implement algorithms for autonomous driving, using learning and control techniques which are similar to those used in the automotive industry, but without having to take on the elevated relative costs. 
* [Affordable Modular Autonomous Vehicle Development Platform](https://benedictquartey.com/assets/downloads/8506757.pdf) - Paper on RollE published by IEEE * [Data Collection and Autonomy Demonstration of RollE](https://www.youtube.com/watch?v=1iLejcGQvJw) - Video demonstration of RollE ## Getting Started Head to the [Wiki](https://github.com/benedictquartey/RollE/wiki) page for instructions to get you set up. ## Contributing Contributions are welcome and encouraged; create a pull request to submit contributions. ## Author * **Benedict Quartey** | 1,535
bennycheung/PyDeepStyle | ['style transfer'] | ['A Neural Algorithm of Artistic Style'] | deepstyle.py | deprocess_image Evaluator gram_matrix eval_loss_and_grads content_loss total_variation_loss preprocess_image style_loss expand_dims preprocess_input img_to_array load_img reshape transpose astype dot transpose batch_flatten permute_dimensions gram_matrix square reshape astype f_outputs | # PyDeepStyle PyDeepStyle originated from the Keras <https://keras.io> example implementation of neural style transfer, used to create "deep" and impressive images. (The original paper "A Neural Algorithm of Artistic Style" can be found at <https://arxiv.org/abs/1508.06576>.) We shall try to run an algorithm for combining the content of one image with the style of another image using pre-trained convolutional neural networks. Here's an example that maps the artistic style of Kandinsky's "TranverseLine" onto Ralph McQuarrie's "Robot Dreams" to create a unique artistic image. Obviously, that hair style is destined to harmonize with Van Gogh's curvy strokes.  The full installation instructions (on Windows 10) can be found in the blog post at: <http://bennycheung.github.io/deep-learning-on-windows-10> ## deepstyle.py > The script has been updated to work with the latest Keras 2.1.6 [2018/05/08] The Python script `deepstyle.py` is the Keras implementation of the neural style transfer algorithm, using a pre-trained convolutional neural network (VGG19). The `run.sh` bash script takes your input {content_image}, {style_image} and {output_directory} for generating the results. | 1,536
berlino/weaksp_em19 | ['semantic parsing'] | ['Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs'] | wikitable/sempar/action_walker.py wikisql/trainer/util.py wikisql/train_seq.py wikisql/module/seq2seq.py wikitable/sempar/domain_languages/wikitable_language.py wikitable/model/seq.py wikisql/sempar/util.py wikitable/sempar/context/__init__.py wikitable/sempar/worlds/wikitable_world.py wikisql/reader/reader.py wikitable/sempar/executors/wikitable_executor.py wikisql/model/struct.py wikitable/sempar/domain_languages/wikitable_abstract_language.py wikitable/sempar/type_declarations/wikitable_types.py wikisql/sempar/action_walker.py wikitable/trainer/util.py wikitable/module/sketch_generator.py wikitable/module/sparse.py wikitable/module/lattice.py wikitable/sempar/evaluator.py wikitable/reader/util.py wikisql/model/baseline.py wikisql/scripts/cache_lf.py wikitable/model/struct.py wikisql/model/seq.py wikitable/scripts/cache_lf.py wikisql/scripts/eval_coverage.py wikitable/model/util.py wikitable/scripts/eval_coverage.py wikisql/sempar/context/wikisql_context.py wikitable/sempar/context/table_question_context_legacy.py wikitable/model/baseline.py wikitable/module/linked_seq2seq.py wikitable/sempar/context/table_question_context.py wikitable/module/slot_filler.py wikisql/sempar/domain_languages/wikisql_language.py wikisql/module/lattice.py wikitable/sempar/util.py wikitable/train_seq.py wikitable/reader/reader.py wikisql/model/util.py setup.py wikitable/sempar/abstract_action_walker.py wikisql/reader/util.py wikitable/train_config/train_seq_config.py wikisql/train_config/train_seq_config.py wikitable/module/seq2seq.py test_epoch ReaderUnpickler run_epoch log_sum_exp Programmer log_sum_exp SeqProgrammer StructProgrammer log_sum_exp construct_row_selections construct_same construct_junction log_sum_exp_2d log_sum_exp Lattice Seq2Seq WSReader load_jsonl_table load_jsonl load_actions coverage ActionSpaceWalker gen_slot2action_dic check_multi_col get_right_side_parts get_left_side_part WikiSQLContext StringColumn WikiSQLLanguage NumberColumn Column Row set_seed get_sketch_prod_and_slot clip_model_grad get_sketch_prod weight_init filter_sketches create_opt Config test_epoch ReaderUnpickler run_epoch log_sum_exp Programmer log_sum_exp SeqProgrammer StructProgrammer log_sum_exp construct_row_selections construct_same construct_junction log_sum_exp_2d log_sum_exp Lattice LinkedSeq2Seq Seq2Seq Sketcher SlotFiller SparsemaxFunction LogSparsemax _make_ix_like _threshold_and_support Sparsemax WTReader load_jsonl_table load_jsonl load_actions coverage AbstractWalker ActionSpaceWalker StringValue DateValue check_denotation Value NumberValue target_values_map main normalize to_value to_value_list tsv_unescape_list tsv_unescape gen_slot2action_dic check_multi_col get_right_side_parts get_left_side_part TableQuestionContext Date TableQuestionContext StringColumn ComparableColumn Column NumberColumn WikiTableAbstractLanguage DateColumn Row StringColumn ComparableColumn Date WikiTablesLanguage NumberColumn Column DateColumn Row WikiTablesVariableFreeExecutor Date WikiTablesVariableFreeWorld set_seed get_sketch_prod_and_slot clip_model_grad get_sketch_prod weight_init filter_sketches create_opt Config backward clip_model_grad zero_grad programmer clip_norm tqdm read_from_json take_features info train step len tqdm read_from_json take_features eval info len mean log_softmax stack isinstance append mean log_softmax dict set defaultdict ActionSpaceWalker 
action_sequence_to_logical_form get_logical_forms_by_sketches print WikiSQLLanguage close tqdm read_from_json take_features evaluate_logical_form append open split split len reversed dict append range get_right_side_parts split gen_slot2action_dic get_nonterminal_productions get_left_side_part append MultiStepLR Adagrad Adam SGD data constant_ ConvTranspose3d BatchNorm3d Conv3d normal_ xavier_normal_ BatchNorm1d GRUCell GRU BatchNorm2d ConvTranspose1d LSTMCell Conv1d ConvTranspose2d isinstance orthogonal_ Conv2d parameters LSTM parameters clip_grad_norm_ seed manual_seed items list defaultdict _get_sketch_productions action_sequence_to_logical_form tuple WikiSQLLanguage set read_from_json take_features dict get_nonterminal_productions append union enumerate split print set defaultdict add items list _get_sketch_productions WikiSQLLanguage set read_from_json take_features get_nonterminal_productions union read_from_lines take_corenlp_entities read_from_lines take_corenlp_entities size dim arange cumsum sort _make_ix_like unsqueeze gather read_from_lines WikiTableAbstractLanguage take_corenlp_entities join strip sub parse isinstance join tagged_dataset_path print add_argument round ArgumentParser prediction_path parse_args listdir len to_value_list tsv_unescape_list read_from_lines WikiTableAbstractLanguage take_corenlp_entities read_from_lines WikiTableAbstractLanguage take_corenlp_entities | berlino/weaksp_em19 | 1,537 |
bermanmaxim/LovaszSoftmax | ['semantic segmentation'] | ['The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks', 'The Lovász-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks'] | pytorch/lovasz_losses.py demo_helpers/demo_utils_tf.py demo_helpers/demo_utils.py tensorflow/lovasz_losses_tf.py printoptions pil paletteVOC dummy_triangles pil_grid define_scope doublewrap lovasz_grad flatten_binary_scores iou binary_xloss xloss lovasz_hinge_flat StableBCELoss lovasz_hinge lovasz_softmax_flat isnan mean flatten_probas lovasz_softmax iou_binary lovasz_grad flatten_binary_scores lovasz_hinge_flat lovasz_hinge lovasz_softmax_flat flatten_probas lovasz_softmax zeros bitget array range fromarray putpalette paletteVOC new min len paste max enumerate Draw new paletteVOC putpalette polygon get_printoptions set_printoptions __name__ cumsum sum len mean zip append float sum zip append float sum range mean lovasz_hinge_flat data lovasz_grad relu Variable sort dot float view Variable float flatten_binary_scores mean lovasz_softmax_flat data lovasz_grad Variable sort size dot append float abs size view filterfalse next iter enumerate concat reduce_sum reduce_mean map_fn equal cond reshape boolean_mask not_equal reduce_mean map_fn dtype stop_gradient boolean_mask stack reduce_mean cast top_k gather equal tensordot reshape transpose boolean_mask not_equal | # The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks <img src="https://cdn.rawgit.com/bermanmaxim/bermanmaxim.github.io/5edecd41/single_LSimage.jpg" height="180"> Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko ESAT-PSI, KU Leuven, Belgium. Published in CVPR 2018. See [project page](http://bmax.im/LovaszSoftmax), [arxiv paper](https://arxiv.org/abs/1705.08790), [paper on CVF open access](http://openaccess.thecvf.com/content_cvpr_2018/html/Berman_The_LovaSz-Softmax_Loss_CVPR_2018_paper.html). ## PyTorch implementation of the loss layer (*pytorch* folder) **Files included:** * **lovasz_losses.py**: Standalone PyTorch implementation of the Lovász hinge and Lovász-Softmax for the Jaccard index * **demo_binary.ipynb**: Jupyter notebook showcasing binary training of a linear model, with the Lovász Hinge and with the Lovász-Sigmoid. * **demo_multiclass.ipynb**: Jupyter notebook showcasing multiclass training of a linear model with the Lovász-Softmax | 1,538 |
bermanmaxim/superpixPool | ['semantic segmentation'] | ['Efficient semantic image segmentation with superpixel pooling'] | pytorch_superpixpool/suppixpool_orig.py chainer_supvoxpool/supvoxpool.py pytorch_superpixpool/setup.py pytorch_superpixpool/test_GPUpool.py pytorch_superpixpool/suppixpool_layer.py | SupVoxPoolNumba_avg maximum_numba SupVoxPoolNumba SupVoxPoolNdImage SupVoxPoolNdImage_avg SupVoxPoolGPU_v2 SupVoxPoolGPU SupVoxPoolGPU_avg average_numba SupPixPool SupPixUnpool SupPixPoolFunction SupVoxPoolNumba_avg maximum_numba SupVoxPoolNumba SupVoxPoolNdImage SupVoxPoolNdImage_avg SupVoxPoolGPU_v2 SupVoxPoolGPU SupVoxPoolGPU_avg average_numba ndindex zeros divide shape ones shape ndindex zeros max | # superpixPool Superpixel Pooling implemented in PyTorch and Chainer  This code is the product of a Master's thesis done by Mathijs Schuurmans under the supervision of Maxim Berman and Matthew Blaschko, at the Center for Processing Speech and Images, Dept. of Electrical Engineering, KU Leuven. It includes a CUDA implementation of superpixel pooling with integration into the Chainer and PyTorch deep learning frameworks. Experimental details related to this work are given in [https://arxiv.org/abs/1806.02705](https://arxiv.org/abs/1806.02705). Please cite that article if you use this code in your research. At the moment the Chainer version implements a 3D pooling layer, and the PyTorch version a 2D pooling layer. See the PyTorch and Chainer subfolders for the implementation of the pooling layer in these respective frameworks. **WARNING:** we are not actively maintaining this package. Hopefully it can be useful to some (perhaps as a starting point) but it is difficult for us to keep up with ensuring compatibility among compiler/Python/PyTorch versions. | 1,539
beskal80/KHAIREPO | ['face swapping'] | ['DeepFaceLab: Integrated, flexible and extensible face-swapping framework'] | models/Model_XSeg/Model.py mainscripts/VideoEd.py XSegEditor/QImageDB.py merger/MergerScreen/MergerScreen.py mainscripts/dev_misc.py merger/InteractiveMergerSubprocessor.py samplelib/SampleGeneratorImageTemporal.py core/leras/models/__init__.py samplelib/Sample.py core/joblib/__init__.py core/imagelib/estimate_sharpness.py core/joblib/MPFunc.py core/leras/layers/DenseNorm.py core/leras/nn.py samplelib/SampleProcessor.py core/leras/archis/__init__.py core/leras/layers/Conv2D.py core/stdex.py core/imagelib/text.py core/leras/models/PatchDiscriminator.py core/leras/__init__.py models/Model_XSeg/__init__.py core/leras/layers/DepthwiseConv2D.py core/imagelib/equalize_and_stack_square.py samplelib/__init__.py core/imagelib/sd/draw.py core/randomex.py core/joblib/MPClassFuncOnDemand.py core/leras/optimizers/__init__.py core/leras/initializers/__init__.py mainscripts/Extractor.py mainscripts/Sorter.py core/mathlib/__init__.py core/leras/optimizers/OptimizerBase.py core/imagelib/draw.py samplelib/SampleGeneratorBase.py core/leras/layers/BatchNorm2D.py core/leras/layers/ScaleAdd.py XSegEditor/QCursorDB.py models/__init__.py core/joblib/SubprocessGenerator.py DFLIMG/DFLIMG.py core/leras/layers/__init__.py models/ModelBase.py facelib/FaceEnhancer.py core/joblib/ThisThreadGenerator.py XSegEditor/QIconDB.py core/imagelib/blursharpen.py merger/MergeMasked.py core/cv2ex.py models/Model_Quick96/__init__.py main.py core/leras/archis/ArchiBase.py localization/localization.py facelib/LandmarksProcessor.py core/leras/layers/Dense.py models/Model_SAEHD/__init__.py mainscripts/Merger.py mainscripts/XSegUtil.py core/imagelib/filters.py XSegEditor/QStringDB.py core/leras/initializers/CA.py core/imagelib/__init__.py core/leras/models/ModelBase.py samplelib/SampleGeneratorFaceTemporal.py core/leras/layers/InstanceNorm2D.py facelib/S3FDExtractor.py samplelib/SampleLoader.py samplelib/SampleGeneratorFaceXSeg.py core/qtex/qtex.py DFLIMG/__init__.py core/leras/optimizers/RMSprop.py core/structex.py core/leras/layers/BlurPool.py facelib/__init__.py core/leras/archis/DeepFakeArchi.py core/imagelib/SegIEPolys.py mainscripts/FacesetEnhancer.py merger/FrameInfo.py models/Model_SAEHD/Model.py core/mplib/__init__.py samplelib/SampleGeneratorFacePerson.py merger/MergerScreen/__init__.py core/imagelib/common.py core/imagelib/sd/__init__.py core/leras/layers/AdaIN.py core/leras/layers/LayerBase.py facelib/FANExtractor.py core/leras/models/XSeg.py samplelib/SampleGeneratorFace.py merger/MergeAvatar.py core/pathex.py core/leras/layers/Conv2DTranspose.py core/qtex/QXMainWindow.py merger/__init__.py core/imagelib/morph.py core/joblib/SubprocessorBase.py mainscripts/Trainer.py samplelib/PackedFaceset.py core/mathlib/umeyama.py mainscripts/Util.py core/interact/__init__.py core/leras/layers/FRNorm2D.py core/leras/layers/Saveable.py core/leras/layers/TLU.py facelib/XSegNet.py core/mplib/MPSharedList.py core/leras/models/CodeDiscriminator.py facelib/FaceType.py XSegEditor/XSegEditor.py merger/MergerConfig.py core/leras/ops/__init__.py core/interact/interact.py core/leras/device.py core/imagelib/sd/calc.py core/imagelib/reduce_colors.py samplelib/SampleGeneratorImage.py localization/__init__.py DFLIMG/DFLJPG.py core/qtex/__init__.py core/qtex/QSubprocessor.py core/qtex/QXIconButton.py samplelib/SampleGeneratorFaceCelebAMaskHQ.py core/imagelib/warp.py core/osex.py core/imagelib/color_transfer.py 
models/Model_Quick96/Model.py process_dev_test process_merge process_videoed_cut_video process_train process_faceset_enhancer process_xsegeditor process_xsegapply process_xsegremove process_xsegremovelabels process_videoed_video_from_sequence process_xsegfetch process_util process_extract fixPathAction process_videoed_extract_video process_sort bad_args process_videoed_denoise_image_sequence cv2_imwrite cv2_resize cv2_imread set_process_dpi_aware get_screen_size set_process_lowest_prio get_image_paths move_all_files write_bytes_safe get_first_file_by_stem get_image_unique_filestem_paths get_all_dir_names get_file_paths delete_all_files scantree get_paths get_all_dir_names_startswith random_normal suppress_stdout_stderr struct_unpack LinearMotionBlur blursharpen _scale_array color_transfer color_transfer_idt color_transfer_mkl reinhard_color_transfer lab_image_stats linear_color_transfer channel_hist_match color_transfer_mix color_transfer_sot color_hist_match overlay_alpha_image cut_odd_image normalize_channels draw_polygon draw_rect equalize_and_stack_square compute _calculate_sharpness_metric marziliano_method get_block_contrast _simple_thinning estimate_sharpness is_edge_block sobel apply_random_motion_blur apply_random_rgb_levels apply_random_hsv_shift apply_random_bilinear_resize apply_random_gaussian_blur morphTriangle morph_by_points applyAffineTransform reduce_colors SegIEPolys SegIEPolyType SegIEPoly get_text_image draw_text_lines draw_text _get_pil_font get_draw_text_lines warp_by_params gen_warp_params dist_to_edges random_circle_faded circle_faded InteractBase InteractColab InteractDesktop MPClassFuncOnDemand MPFunc SubprocessGenerator Subprocessor ThisThreadGenerator Devices Device nn ArchiBase DeepFakeArchi CAInitializerSubprocessor initializers AdaIN BatchNorm2D BlurPool Conv2D Conv2DTranspose Dense DenseNorm DepthwiseConv2D FRNorm2D InstanceNorm2D LayerBase Saveable ScaleAdd TLU CodeDiscriminator ModelBase UNetPatchDiscriminator PatchDiscriminator XSeg dssim concat average_gv_list resize2d_bilinear flatten rgb_to_lab resize2d_nearest space_to_depth tf_gradients random_binomial style_loss gelu init_weights tf_get_value upsample2d reshape_4D batch_set_value max_pool average_tensor_list gaussian_blur depth_to_space OptimizerBase RMSprop umeyama get_power_of_two rotationMatrixToEulerAngles polygon_area ArrayFillerSubprocessor MPSharedList IndexHost Index2DHost ListHost DictHostCli DictHost QSubprocessor QDarkPalette QActionEx QSize_to_np QImage_from_np QImage_to_np QPixmap_from_np QPoint_to_np QPoint_from_np QXIconButton QXMainWindow DFLIMG DFLJPG FaceEnhancer FaceType FANExtractor blur_image_hull_mask mirror_landmarks get_face_struct_mask estimate_pitch_yaw_roll convert_98_to_68 expand_eyebrows get_rect_from_landmarks get_transform_mat draw_rect_landmarks get_cmask transform_points estimate_averaged_yaw calc_face_pitch alpha_to_color get_image_eye_mask draw_landmarks get_image_hull_mask S3FDExtractor XSegNet dev_test_68 dev_test1 dev_resave_pngs extract_vggface2_dataset extract_umd_csv dev_segmented_trash process_folder FacesetEnhancerSubprocessor extract_video video_from_sequence denoise_image_sequence cut_video remove_xseg remove_xseg_labels apply_xseg fetch_xseg FrameInfo InteractiveMergerSubprocessor MergeFaceAvatar process_frame_info MergeMasked MergeMaskedFace MergerConfigMasked MergerConfigFaceAvatar MergerConfig ScreenManager ScreenAssets Screen ModelBase PreviewHistoryWriter import_model QModel SAEHDModel XSegModel PackedFaceset Sample SampleType SampleGeneratorBase 
SampleGeneratorFace SampleGeneratorFaceCelebAMaskHQ MaskType SampleGeneratorFacePerson Index2DHost SampleGeneratorFaceTemporal SampleGeneratorFaceXSeg SegmentedSampleFilterSubprocessor SampleGeneratorImage SampleGeneratorImageTemporal FaceSamplesLoaderSubprocessor SampleLoader SampleProcessor QCursorDB QIconDB QImageDB QStringDB ImagePreviewSequenceBar QUIConfig QCanvasOperator LoaderQSubprocessor CanvasConfig OpMode QCanvas DragType ViewLock ColorScheme QCanvasControlsLeftBar start QCanvasControlsRightBar MainWindow PTEditMode main set_process_lowest_prio main set_process_lowest_prio unpack_faceset pack save_faceset_metadata log_info restore_faceset_metadata_folder pack_faceset save_faceset_metadata_folder restore_faceset_metadata Path input_dir unpack recover_original_aligned_filename set_process_lowest_prio add_landmarks_debug_images main set_process_lowest_prio main set_process_lowest_prio output_ext fps extract_video output_dir input_file set_process_lowest_prio audio_track_id from_time bitrate to_time cut_video input_file set_process_lowest_prio factor denoise_image_sequence set_process_lowest_prio input_dir video_from_sequence set_process_lowest_prio Path set_process_lowest_prio input_dir process_folder dev_test set_process_lowest_prio input_dir start Path set_process_lowest_prio input_dir model_dir apply_xseg Path input_dir set_process_lowest_prio Path remove_xseg set_process_lowest_prio input_dir remove_xseg_labels Path set_process_lowest_prio input_dir Path fetch_xseg set_process_lowest_prio input_dir print_help exit loader_func asarray bytearray imencode suffix shape normalize_channels resize nice SetPriorityClass HANDLE GetCurrentProcess SetProcessDPIAware user32 write_bytes parent name unlink rename exists is_dir scandir str list scandir any Path scantree exists append remove get_image_paths name stem set add verbose_print_func Path exists Path exists Path exists str list lower scandir Path startswith append exists str sorted list path lower scandir Path exists name Path rename get_file_paths unlink Path get_file_paths normal empty prod range calcsize warpAffine ones getRotationMatrix2D zeros sum medianBlur addWeighted ones zeros GaussianBlur max dtype reshape astype copy argsort shape bilateralFilter fill empty range eps T clip reshape eig dot shape sqrt cov mean diag T reshape min astype float32 empty_like solve dot shape histogram interp max range _scale_array uint8 astype float32 merge lab_image_stats COLOR_LAB2BGR cvtColor split T reshape transpose mean dot eigh eye cholesky split min max float64 astype shape unique interp ravel dtype astype shape channel_hist_match range uint8 astype float32 COLOR_BGR2LAB color_transfer_sot COLOR_LAB2BGR cvtColor uint8 color_transfer_idt color_transfer_mkl astype float32 reinhard_color_transfer linear_color_transfer color_transfer_sot clip shape repeat len shape shape range tuple line range len draw_polygon concatenate shape resize expand_dims max enumerate T convolve square mean sqrt array shape zeros float64 marziliano_method astype canny sobel gradient atan2 shape any zeros round range int exp slice get_block_contrast shape flipud round zeros is_edge_block rot90 range cvtColor COLOR_BGR2GRAY rand random clip array COLOR_HSV2BGR random merge COLOR_BGR2HSV randint clip cvtColor split LinearMotionBlur randint random randint GaussianBlur random int rand random shape resize float32 getAffineTransform float32 fillConvexPoly shape boundingRect int32 applyAffineTransform zeros expand_dims array shape morphTriangle zeros simplices fromarray 
uint8 convert astype COLOR_RGB2BGR array cvtColor truetype asarray Draw get_default_ttf_font_name concatenate text new _get_pil_font shape clip draw_text range len draw_text_lines zeros shape T random astype copy float32 getRotationMatrix2D dict uniform linspace random_normal warpAffine remap resize norm clip einsum concatenate norm reshape empty abs clip max random randint initializer inputs append batch_set_value run gradients expand_dims __enter__ __exit__ enumerate reduce_mean __enter__ __exit__ concat pow tanh sqrt pi as_list reshape tile transpose value resize transpose value resize transpose reshape transpose randint float32 pad make_kernel tile depthwise_conv2d gaussian_blur dtype constant arange reshape float32 square reduce_mean reducer cast softmax tile max as_list reshape transpose as_list reshape transpose constant reshape multiply matmul cast svd T ones matrix_rank mean dot eye sum diag sqrt atan2 shape Format_Grayscale8 Format_BGR888 Format_ARGB32 height reshape convertToFormat width constBits setsize range squeeze invertAffineTransform shape transform expand_dims get norm getAffineTransform polygon_area astype float32 transform_points sqrt estimate_averaged_yaw array transform_points FULL_NO_ALIGN get_transform_mat float32 array copy concatenate expand_eyebrows fillConvexPoly convexHull zeros int getStructuringElement astype fillConvexPoly MORPH_ELLIPSE convexHull dilate zeros GaussianBlur shape zeros concatenate process copy blend alpha_to_color zeros get_image_hull_mask gdf max clip int blur getStructuringElement min erode argwhere MORPH_ELLIPSE expand_dims copy draw_landmarks zeros expand_eyebrows concatenate polylines tuple shape get_image_hull_mask array circle get_transform_mat draw_rect transform_points draw_polygon draw_landmarks array array rotationMatrixToEulerAngles concatenate astype float32 pi solvePnP zeros array clip get pop get_image_paths parent log_info name stem progress_bar_generator get_all_dir_names Path mkdir run fromString split cv2_imread Path normalize_channels exists input_bool str log_info name stem append get_image_paths get_rect_from_landmarks unlink mkdir parent cv2_imwrite progress_bar_generator read_text split get str get_image_paths parent log_info name len unlink Path mkdir split log_err run range exists fromString input_bool get_image_paths progress_bar_generator get_all_dir_names Path x get_image_paths cv2_imwrite progress_bar_generator cv2_imread Path get_image_paths parent name stem rename Path mkdir append input_bool join get_image_paths log_info parent name copy unlink rmtree mkdir run update str get_image_paths parent input_str stem output get_first_file_by_stem unlink input_int mkdir Path log_err input run str suffix parent input_str stem overwrite_output input_int log_err Path input max run update str suffix parent progress_bar_generator output input_int rename log_err Path run clip enumerate suffix input_str wait input_int Path max input_bool str stem input update run_async get_image_paths close mkdir parent overwrite_output get_first_file_by_stem log_err probe load extract initialize get_image_paths log_info set_xseg_mask progress_bar_generator astype float32 get_resolution ask_choose_device shape XSegNet resize save load str get_image_paths log_info parent name has_polys progress_bar_generator copy get_seg_ie_polys mkdir load get_image_paths log_info set_xseg_mask input_str progress_bar_generator has_xseg_mask save load get_image_paths log_info input_str has_seg_ie_polys progress_bar_generator save set_seg_ie_polys warpAffine 
get_transform_mat astype float32 cv2_imread normalize_channels filename clip sharpen_func sharpen_mode concatenate predictor_func add_source_image process_frame_info temporal_face_count append range sharpen_amount predictor_func color_transfer_mkl motion_power bicubic_degrade_power motion_blur_power linear_color_transfer color_transfer_mix boundingRect resize reduce_colors max clip face_enhancer_func hist_match_threshold medianBlur super_resolution_power WARP_INVERSE_MAP ones LinearMotionBlur shape pad blur_mask_modifier image_denoise_power masked_hist_match blursharpen range color_hist_match BORDER_TRANSPARENT warpAffine sharpen_mode xseg_256_extract_func seamlessClone color_transfer_idt astype copy reinhard_color_transfer empty_like motion_deg INTER_CUBIC MORPH_ELLIPSE color_transfer_sot dilate GaussianBlur get_image_hull_mask NORMAL_CLONE uint8 int erode_mask_modifier getStructuringElement get_transform_mat float32 erode argwhere blursharpen_amount color_degrade_power landmarks_list concatenate astype float32 cv2_imread shape normalize_channels MergeMaskedFace filepath clip enumerate str parent cv2_imread locals __import__ globals dict setApplicationName setPalette QDarkPalette Path show str initialize log_info setWindowIcon addApplicationFont AA_EnableHighDpiScaling setStyle setFont gettempdir setAttribute QApplication path_contains app_icon MainWindow exec_ parent QFont raise_ AA_UseHighDpiPixmaps | <table align="center" border="0"> <tr><td colspan=2 align="center"> # DeepFaceLab <a href="https://arxiv.org/abs/2005.05535"> <img src="https://static.arxiv.org/static/browse/0.3.0/images/icons/favicon.ico" width=14></img> https://arxiv.org/abs/2005.05535</a> ### the leading software for creating deepfakes <img src="doc/DFL_welcome.png" align="center"> </td></tr> <tr><td colspan=2 align="center"> | 1,540 |
bethard/timenorm | ['semantic composition'] | ['A Semantically Compositional Annotation Scheme for Time Normalization'] | src/main/python/subToSuperInterval/analyze_duplicates.py src/main/python/model_training.py src/main/python/genranddates.py src/main/python/ruleLinking.py src/main/python/preprocess.py src/main/python/anafora_funct.py src/main/python/output.py src/main/python/read_files.py src/main/python/subToSuperInterval/sub_to_super_interval.py src/main/python/process_functions.py src/main/python/preprocess_functions.py get_types get_schema trainging load_hdf5 span2xmlfiles main sentence_length generate_output_multiclass get_train features_extraction get_sample_weights_multiclass document_level_2_sentence_level output_encoding get_list_name xml_tag_in_sentence create_class_weight main split_by_sentence get_idx_from_sent extract_xmltag_anafora extract_xmltag_anafora_pred word_pos_2_character_pos text_normalize get_unicode get_pos_sentence get_explict_label split_sentence_based_on_rules add_start_end rule_based_tokenizer tokenize_span addannotation_to_dict get_implict_label extract_xmltag_timeml get_words spans found_location_with_constraint get_gold_dict metrics evaluate make_prediction_function_multiclass prob2classes_multiclasses loc2span prob2classes_multiclasses_multioutput get_counts hot_vectors2class_index calculate_score pro2classes_binaryclass readfrom_txt textfile2list create_folder save_hdf5 movefiles counterList2Dict load_hdf5 readfrom_json readfrom_pickle savein_json savein_pickle process_doc get_relation main sub_to_super get parse dict findall bool split dict split close open list Bidirectional Explicit_operator Implicit_operator Gru_out_1 save Gru_out_2 Input set_weights ModelCheckpoint load_model Interval_output Model get_weights Gru_out_6 Gru_out_3 Dense Gru_out_4 compile print fit Gru_out_5 CSVLogger history summary LSTM makedirs str AnaforaEntity append AnaforaData indent append range found_location_with_constraint join list textfile2list create_folder counterList2Dict make_prediction_function_multiclass len loc2span span2xmlfiles dict readfrom_json to_file save append range savein_pickle enumerate int list load_model sentence_length divide range float round generate_output_multiclass len replace savein_json textfile2list readfrom_json savein_json list print len rule_based_tokenizer sent_tokenize append regexp_span_tokenize spans sorted list keys append append range count_nonzero items list asarray counterList2Dict dict zeros float sum range values enumerate append list create_class_weight readfrom_txt join list defaultdict text_normalize extract_xmltag_anafora sort xml_tag_in_sentence range split_by_sentence savein_json len join list asarray save_hdf5 print makedirs readfrom_json append range get_idx_from_sent len get_explict_label get_sample_weights_multiclass save textfile2list list defaultdict counterList2Dict get_implict_label append sum range asarray save_hdf5 set readfrom_json enumerate join int print repeat zeros len features_extraction output_encoding list word_tokenize append find append type items sorted text from_file annotations dict OrderedDict _tag_to_property_xml addannotation_to_dict items sorted from_file annotations dict OrderedDict addannotation_to_dict items sorted print from_file annotations dict OrderedDict append append list find append list print search split split_sentence_based_on_rules popleft add_start_end deque append spans list word_tokenize StanfordPOSTagger print tag append tokenize_span list print append list print match append bool 
range len append list category append list argmax makedirs prob2classes_multiclasses predict prob2classes_multiclasses_multioutput append list index append list range len append list len print items list items list get_counts print get_gold_dict join extract_xmltag_anafora_pred readfrom_txt metrics readfrom_json calculate_score range exists len join makedirs print close create_folder close create_folder read readfrom_txt list splitlines append create_folder File create_dataset range len create_folder replace copy dict rstrip endswith search open str list sorted map strftime title append parse get_relation Element close listdir pop join read remove text2num text makedirs write extend dict sub split findall find join sorted items print from_file annotations Counter walk join from_file annotations id to_file walk makedirs | # timenorm The timenorm library provides models for finding natural language expressions of dates and times and converting them to a normalized form. ## Text to time expressions with the neural parser The primary entry point for the library is the `TemporalNeuralParser` class, which implements a character-based recurrent neural network for finding and normalizing time expressions, as described in: > Egoitz Laparra, Dongfang Xu, and Steven Bethard. 2018. > [From Characters to Time Intervals: New Paradigms for Evaluation and Neural Parsing of Time Normalizations](https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00025). > In: Transactions of the Association for Computational Linguistics 2018, Vol. 6, pp. 343–356 | 1,541 |
bflashcp3f/PCNN-NMAR | ['relation extraction'] | ['Structured Minimally Supervised Learning for Neural Relation Extraction'] | train.py utils/kb_info.py utils/helper.py eval.py model/PCNN_NMAR.py utils/loader.py main main Label PCNN_NMAR sort_bags_by_size print_config check_dir getting_hamming_score get_MIT_MID_score get_entity_frequencty DataLoader pos_min_e1 model_dir DataLoader pos_max_e2 pos_min_e2 print_config ArgumentParser cuda list sorted sentential_eval pos_max_e1 bags_test index2word load_state_dict load_word2vec_format parse_args bags_train format glob heldout_eval vars enumerate load items print add_argument cpu syn0 PCNN_NMAR len float64 zero_grad SGD save get_MIT_MID_score step set_defaults PCNN_NMAR_model range train_bags_label shuffle float getting_hamming_score NLLLoss time backward now parameters check_dir print makedirs list items sorted print sorted list keys list get_entity_frequencty zeros keys range split ones list keys | Structured Minimally Supervised Learning for Neural Relation Extraction ========================= This repo contains the *pytorch* code for paper [Structured Minimally Supervised Learning for Neural Relation Extraction](https://arxiv.org/abs/1904.00118). @inproceedings{bai-ritter-2019-structured, title = "{S}tructured {M}inimally {S}upervised {L}earning for {N}eural {R}elation {E}xtraction", author = "Bai, Fan and Ritter, Alan", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", | 1,542 |
bgenchel/MusicalSeqGAN | ['text generation'] | ['SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient'] | src/utils/data/datasets.py src/models/nottingham/discriminator.py src/models/charlie_parker/make_music.py src/models/bebop/main.py src/models/nottingham/gan_loss.py src/data/conversion/mxl_to_xml.py src/models/charlie_parker/rollout.py src/models/charlie_parker/data_iter.py src/models/nottingham/rollout.py src/utils/data/dataloaders.py src/scripts/get_data.py src/models/charlie_parker/eval.py src/models/charlie_parker/generator.py src/utils/constants.py src/models/bebop/gan_loss.py src/utils/reverse_pianoroll.py src/models/bebop/data_iter.py src/data/conversion/xml_to_json.py src/models/charlie_parker/discriminator.py src/models/bebop/eval.py src/models/nottingham/generator.py src/models/charlie_parker/gan_loss.py src/evaluation/toolkit.py src/models/bebop/discriminator.py src/data/parsing/harmony.py src/evaluation/bleu.py src/models/bebop/rollout.py src/models/nottingham/make_music.py src/models/bebop/generator.py src/models/nottingham/eval.py src/models/bebop/make_music.py src/models/charlie_parker/main.py src/models/nottingham/data_iter.py src/data/conversion/fix_fnames.py src/data/parsing/parser.py src/models/nottingham/main.py main harmony_timing xml_to_dict Harmony Parser TickParser PitchDurParser rotate BleuScore MGEval read_file GenDataset DscrDataset Discriminator bleu listify render_midi get_predictions GANLoss Generator create_generated_data_file get_subset_dataloader train_epoch main create_real_data_file eval_epoch main sequence_to_midi Rollout read_file GenDataset DscrDataset Discriminator bleu listify render_midi get_predictions GANLoss Generator create_generated_data_file get_subset_dataloader train_epoch main create_real_data_file eval_epoch main sequence_to_midi Rollout read_file GenDataset DscrDataset Discriminator bleu listify render_midi get_predictions GANLoss Generator create_generated_data_file get_subset_dataloader train_epoch main create_real_data_file eval_epoch main sequence_to_midi Rollout get_bebop get_nottingham cqt_to_piano_roll piano_roll_to_pretty_midi SplitDataLoader BebopTicksDataset call str join listdir parse getroot append append split format print evaluate_bleu_score BleuScore get_predictions join str list tqdm mkdir get_predictions range sequence_to_midi load join NottinghamDataset print Generator listify extend tqdm DataLoader load_state_dict is_available argmax cuda sample list range len tolist extend range list numpy extend iter view backward Variable step set_trace zero_grad tqdm loss_fn forward data create_generated_data_file gen_gan_loss DscrDataset zero_grad DataLoader save forward cuda open seed view GANLoss Generator len Adam strftime load_state_dict Discriminator append range state_dict dump LongTensor train_type get_subset_dataloader get_reward update_params sample float type create_real_data_file BebopTicksDataset eval_epoch Rollout NLLLoss deepcopy load backward print Variable contiguous makedirs train_epoch parameters Tensor step split eval write piano_roll_to_pretty_midi zeros range len BebopTicksDataset random_split NottinghamDataset join remove urlretrieve print mkdir exists join remove urlretrieve print mkdir exists T Instrument PrettyMIDI shape pad nonzero Note zip append zeros digitize min pad linspace abs max | # Curious-Musical-SeqGAN Adapt and evaluate SeqGAN (https://arxiv.org/abs/1609.05473) for music generation. 
The original paper briefly mentions that the model was trained for music generation on the Nottingham folk music dataset, and evaluated using BLEU score. Unfortunately, there was not much information given on how the model was adapted for this use case, and how they achieved the scores they reported. Additionally, code from the authors and others is implemented exclusively towards the central task used in the paper, learning to model a randomly initialized LSTM. Here, we attempt to adapt the model specifically for this purpose and give a clearer and more detailed evaluation on the task of music generation. **N.B. When cloning this repo, use the flag `--recursive` after the address, i.e. `git clone https://github.com/bgenchel/MusicalSeqGAN.git --recursive`.** Otherwise, run `git submodule update --init` in the project root. This is to fetch the git submodule for MGEval. ## Running SeqGAN on Nottingham First, create a conda environment based off of the included `environment.yml` file by running: `conda env create -f environment.yml` The rest of the commands should be done with this environment active. | 1,543
bgenchel/Reinforcement-Learning-for-Music-Generation | ['text generation'] | ['SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient'] | src/utils/data/datasets.py src/model/utils/gttm/elements/concrete_scale.py src/utils/training/__init__.py src/model/utils/training/generator_pretrainer.py src/utils/training/rollout.py src/model/utils/training/subset_dataloader.py src/utils/gttm/elements/attack_event_chord.py src/data/conversion/mxl_to_xml.py src/model/eval.py src/utils/gttm/elements/chord.py src/utils/gttm/elements/key.py src/model/utils/gttm/elements/rest_event.py src/utils/data/dataloaders.py src/scripts/get_data.py src/model/generator.py src/model/utils/training/discriminator_pretrainer.py src/model/discriminator.py src/data/build_sets.py src/model/utils/data/datasets.py src/model/utils/gttm/elements/chord.py src/model/utils/training/policy_update.py src/model/utils/training/rollout.py src/model/utils/training/__init__.py src/utils/training/adversarial_rl_trainer.py src/model/utils/training/data_iter.py src/utils/constants.py src/model/main.py src/model/utils/data/dataloaders.py src/model/utils/gttm/elements/enums.py src/utils/gttm/elements/scale.py src/utils/reverse_pianoroll.py src/utils/gttm/analyzer/utils.py src/data/conversion/xml_to_json.py src/model/make_music.py src/utils/gttm/elements/attack_event.py src/model/utils/constants.py src/model/utils/gttm/analyzer/utils.py src/evaluation/mgeval/mgeval/utils.py src/utils/gttm/elements/enums.py test.py src/evaluation/toolkit.py src/model/utils/gttm/elements/key.py src/data/parsing/harmony.py src/model/utils/gttm/elements/scale.py src/scripts/make_melody.py src/utils/training/subset_dataloader.py src/utils/training/gan_loss.py src/evaluation/bleu.py src/utils/training/generator_pretrainer.py src/model/utils/gttm/elements/attack_event.py src/utils/training/data_iter.py src/utils/gttm/elements/event.py src/data/conversion/fix_fnames.py src/scripts/gen4eval.py src/model/utils/training/adversarial_rl_trainer.py src/model/utils/gttm/elements/event.py src/model/utils/gttm/elements/event_stream.py src/utils/gttm/elements/concrete_scale.py src/data/parsing/parser.py src/utils/gttm/elements/event_stream.py src/model/utils/helpers.py src/model/utils/gttm/elements/attack_event_chord.py src/utils/training/discriminator_pretrainer.py src/evaluation/mgeval/mgeval/core.py src/model/utils/reverse_pianoroll.py src/utils/gttm/elements/rest_event.py C A B create_data_dict prepare_mle_hdf5 write_to_mle_hdf5 get_seqs_and_targets write_to_rl_hdf5 main prepare_rl_hdf5 main harmony_timing xml_to_dict Harmony shift_root_token rotate Parser TickParser PitchDurParser BleuScore calculate_metric MGEval main extract_feature metrics overlap_area c_dist kl_dist Discriminator bleu listify render_midi get_predictions Generator main prepare_vars create_generated_data_file create_real_data_file cqt_to_piano_roll piano_roll_to_pretty_midi SplitDataLoader HDF5Dataset AttackEvent AttackEventChord Chord ConcreteScale Event EventStream Key RestEvent Scale AdversarialRLTrainer read_file DscrDataset DiscriminatorPretrainer GeneratorPretrainer PolicyUpdate Rollout SubsetDataloaderFactory generate_songs get_bebop get_nottingham step convert_chords_to_piano_roll_mat convert_melody_to_piano_roll_mat roll cqt_to_piano_roll piano_roll_to_pretty_midi SplitDataLoader HDF5Dataset AttackEvent AttackEventChord Chord ConcreteScale Event EventStream Key RestEvent Scale AdversarialRLTrainer read_file GenDataset DscrDataset DiscriminatorPretrainer GANLoss GeneratorPretrainer 
Rollout SubsetDataloaderFactory CR_SEQS_KEY CT_SEQS_KEY SEQS_KEY print File create_dataset TARGETS_KEY CR_SEQS_KEY CT_SEQS_KEY SEQS_KEY print File create_dataset resize shape len shape resize append zeros range concatenate create_data_dict join prepare_mle_hdf5 write_to_mle_hdf5 get_seqs_and_targets print extend tqdm write_to_rl_hdf5 append dataset listdir enumerate prepare_rl_hdf5 makedirs call str parse getroot append intra_set_cross_validation intra_inter_difference inter_set_cross_validation get_metric visualize list items calculate_metric MGEval getcwd dirname gaussian_kde gaussian_kde min max linspace norm entropy wasserstein_distance zeros range len format print GEN_SEQ_LEN evaluate_bleu_score BleuScore get_predictions join str list tqdm mkdir get_predictions range sequence_to_midi join str print listify extend HDF5Dataset tqdm DataLoader load_state_dict prepare_vars argmax cuda DSCR_DROPOUT DSCR_NUM_CLASSES cuda seed Generator GEN_EMBED_DIM DSCR_NUM_FILTERS VOCAB_SIZE load_state_dict Discriminator to SEED DSCR_FILTER_LENGTHS DSCR_EMBED_DIM load GEN_HIDDEN_DIM train split Variable to cuda append list BATCH_SIZE print cpu tolist extend tqdm GEN_SEQ_LEN zip sample prepare_vars list print extend tqdm zip numpy T Instrument PrettyMIDI shape pad nonzero Note zip append zeros digitize min pad linspace abs max NONE join format print call enumerate join remove urlretrieve print mkdir exists join remove urlretrieve print mkdir exists zeros sum array range len zeros repeat enumerate dur_net pitch_net append split | # Reinforcement Learning for Music Generation **N.B. When cloning this repo, use the flag `--recursive` after the address, i.e. `git clone https://github.com/bgenchel/MusicalSeqGAN.git --recursive`.** Otherwise, run `git submodule update --init` in the project root. This is to fetch the git submodule for MGEval. ## Running SeqGAN on Nottingham First, create a conda environment based off of the included `environment.yml` file by running: `conda env create -f environment.yml` The rest of the commands should be done with this environment active. To fetch the datasets, run the following: - From project root, `cd src/scripts` - Then run `python get_data.py` | 1,544 |
bgshih/aster | ['optical character recognition', 'scene text recognition'] | ['ASTER: An Attentional Scene Text Recognizer with Flexible Rectification'] | core/convnet.py builders/feature_extractor_builder_test.py builders/preprocessor_builder_test.py builders/feature_extractor_builder.py builders/loss_builder_test.py eval_util.py convnets/crnn_net.py utils/model_deploy_test.py core/bidirectional_rnn.py core/preprocessor.py evaluator.py utils/variables_helper.py tools/create_svt_perspective_tfrecord.py utils/shape_utils.py builders/hyperparams_builder.py utils/visualization_utils.py utils/visualization_utils_test.py demo.py builders/convnet_builder_test.py tools/create_cute80_tfrecord.py builders/convnet_builder.py tools/create_iiit5k_tfrecord.py convnets/stn_convnet.py builders/rnn_cell_builder.py core/standard_fields.py core/spatial_transformer.py builders/predictor_builder_test.py core/prefetcher.py builders/rnn_cell_builder_test.py core/loss.py tools/create_svt_tfrecord.py tools/create_synth90k_tfrecord.py builders/input_reader_builder_test.py builders/spatial_transformer_builder_test.py convnets/resnet.py meta_architectures/multi_predictors_recognition_model.py builders/model_builder.py builders/attention_recognition_model_builder_test.py builders/label_map_builder_test.py builders/bidirectional_rnn_builder_test.py utils/dataset_util.py builders/optimizer_builder.py builders/ctc_recognition_model_builder_test.py core/feature_extractor.py tf2_utils/main.py tf2_utils/inferer.py data_decoders/tf_example_decoder.py trainer.py builders/label_map_builder.py utils/recognition_evaluation.py core/label_map.py builders/optimizer_builder_test.py builders/loss_builder.py c_ops/ops.py core/sync_attention_wrapper.py utils/profile_session_run_hooks.py core/batcher.py builders/preprocessor_builder.py c_ops/ops_test.py core/spatial_transformer_test.py builders/predictor_builder.py train.py builders/spatial_transformer_builder.py tools/create_synthtext_tfrecord.py tools/create_ic15_tfrecord.py builders/hyperparams_builder_test.py meta_architectures/ctc_recognition_model.py core/model.py builders/model_builder_test.py tools/create_ic03_tfrecord.py predictors/attention_predictor.py utils/model_deploy.py utils/learning_schedules.py builders/input_reader_builder.py tools/create_ic13_tfrecord.py eval.py builders/bidirectional_rnn_builder.py core/predictor.py main get_configs_from_exp_dir main get_configs_from_exp_dir get_configs_from_multiple_files get_configs_from_pipeline_file _extract_prediction_tensors evaluate run_checkpoint_once repeated_checkpoint_run visualize_recognition_results write_metrics evaluate_recognition_results main get_configs_from_exp_dir get_configs_from_multiple_files get_configs_from_pipeline_file _create_losses _create_input_queue train _get_inputs_multiqueues LabelMapTest AttentionRecognitionModelBuilderTest build BidirectionalRnnBuilderTest _build_resnet _build_crnn_net build _build_stn_resnet _build_stn_convnet FeatureExtractorTest CtcRecognitionModelBuilderTest build FeatureExtractorBuilderTest _build_batch_norm_params _build_regularizer build _build_initializer _build_activation_fn HyperparamsBuilderTest build InputReaderTest _build_character_set build build LossTest _build_multi_predictors_recognition_model build ModelBuilderTest _create_learning_rate build OptimizerBuilderTest LearningRateBuilderTest _build_language_model_rnn_cell build PredictorBuilderTest _get_dict_from_proto build _get_step_config_from_proto PreprocessorBuilderTest build RnnCellTest build 
SpatialTransformerBuilderTest CrnnNetThreeBranches CrnnNet CrnnNetMultiBranches CrnnNetTiny CrnnNetTwoBranches Resnet Resnet50Layer ResnetForSTN StnConvnetTiny StnConvnet BatchQueue StaticBidirectionalRnn DynamicBidirectionalRnn Convnet FeatureExtractor LabelMap L2RegressionLoss SequenceCrossEntropyLoss Model Predictor prefetch get_default_func_arg_map image_to_float random_adjust_brightness resize_image _random_integer _apply_with_random_selector normalize_image rgb_to_gray subtract_channel_mean random_rgb_to_gray random_adjust_hue resize_image_random_method random_adjust_saturation random_distort_color preprocess random_pixel_value_scale random_adjust_contrast string_filtering _apply_with_random_selector_tuples SpatialTransformer SpatialTransformerTest TfExampleFields InputDataFields SyncAttentionWrapper _load_oplib OpsTest TfExampleDecoder CtcRecognitionModel MultiPredictorsRecognitionModel ConcatOutputMultiRNNCell AttentionPredictor Inferer get_tokenizer infere_images load_image create_cute80 _is_difficult _random_lexicon create_ic03 create_ic13 _is_difficult _is_difficult char_check create_ic15 create_iiit5k_subset create_svt_perspective create_svt_subset main SynthTextCreator main recursive_parse_xml_to_dict int64_list_feature read_examples_list float_list_feature int64_feature bytes_feature bytes_list_feature manual_stepping exponential_decay_with_burnin _gather_clone_loss deploy _add_gradients_summaries _optimize_clone create_clones optimize_clones DeploymentConfig _sum_clones_gradients DeploymentConfigTest DeployTest OptimizeclonesTest BatchNormClassifier CreatecloneTest LogisticClassifier ProfileAtStepHook RecognitionEvaluation combined_static_and_dynamic_shape pad_or_clip_tensor _set_dim_0 clip_tensor _is_tensor pad_tensor filter_variables freeze_gradients_matching_regex get_variables_available_in_checkpoint multiply_gradients_matching_regex draw_bounding_boxes_on_image_array tile_activation_maps_rows_cols save_image_array_as_png encode_image_array_as_png_str draw_mask_on_image_array tile_activation_maps_max_dimensions draw_bounding_box_on_image_array draw_bounding_box_on_image draw_bounding_boxes_on_image draw_keypoints_on_image draw_keypoints_on_image_array encode_image_array_as_png_bytes visualize_boxes_and_labels_on_image_array VisualizationUtilsTest join model exp_dir eval_config eval_input_reader TrainEvalPipelineConfig decode Saver save to_float resize_images fromarray global_variables placeholder build dirname expand_dims predict postprocess get_configs_from_exp_dir format astype input_image decode_jpeg join uint8 print exp_dir train_config eval_training_data train_config model eval_training_data eval_config eval_input_reader TrainEvalPipelineConfig InputReader EvalConfig DetectionModel checkpoint_dir pipeline_config_path get_configs_from_multiple_files get_configs_from_pipeline_file eval_dir partial evaluate to_float postprocess update dequeue py_func shape preprocess string prefetch expand_dims create_input_dict_fn predict get_or_create_global_step global_variables use_moving_averages _extract_prediction_tensors FileWriter repeated_checkpoint_run close Saver ExponentialMovingAverage create_model_fn append variables_to_restore sorted info close FileWriter add_summary Summary restore write_graph list restore_fn latest_checkpoint graph_def close set Saver ConfigProto keys Session run time latest_checkpoint run_checkpoint_once strftime gmtime info sleep enumerate RecognitionEvaluation add_single_image_recognition_info len decode _normalize_text shape imshow scatter savefig 
Axes format imresize astype close add_axes info clear join uint8 set_axis_off figure makedirs train_input_reader train_input_reader TrainConfig Server target loads train_dir get ClusterSpec num_clones type index clone_on_cpu train len create_tensor_dict_fn preprocess to_float BatchQueue dequeue extend create_model_fn list values stack add_loss _get_inputs_multiqueues loss predict provide_groundtruth static rnn_regularizer _build_regularizer StaticBidirectionalRnn fw_bw_rnn_cell DynamicBidirectionalRnn fc_hyperparams WhichOneof CrnnNetThreeBranches CrnnNet conv_hyperparams build CrnnNetTiny CrnnNetTwoBranches conv_hyperparams build Resnet50Layer net_depth StnConvnetTiny StnConvnet FeatureExtractor convnet HasField batch_norm _build_batch_norm_params WhichOneof name WhichOneof tf_record_input_reader input_path parallel_read LabelMap _build_character_set character_set list text_file ascii_lowercase WhichOneof text_string ascii_letters digits split l2_regression_loss sequence_cross_entropy_loss spatial_transformer HasField build feature_extractor MultiPredictorsRecognitionModel learning_rate momentum_optimizer nadam_optimizer MomentumOptimizer adadelta_optimizer adam_optimizer MovingAverageOptimizer AdamOptimizer _create_learning_rate NadamOptimizer use_moving_average RMSPropOptimizer AdadeltaOptimizer rms_prop_optimizer learning_rate get_or_create_global_step initial_learning_rate manual_stepping manual_step_learning_rate decay_steps add decay_factor WhichOneof exponential_decay exponential_decay_learning_rate constant_learning_rate scalar _build_language_model_rnn_cell rnn_cell AttentionPredictor attention_predictor lm_rnn_cell loss label_map MultiRNNCell ListFields ListFields join include_charset _get_step_config_from_proto _get_dict_from_proto resize_image resize_image_random_method string_filtering initializer lstm_cell LSTMCell _build_initializer num_units GRUCell gru_cell to_float list add_queue_runner size PaddingFIFOQueue enqueue QueueRunner keys scalar random_uniform tuple func random_uniform append range len rgb_to_grayscale tile func get_default_func_arg_map zip join oplib_suffix format load_op_library copyfile realpath dirname print get_tokenizer Inferer load_image numpy inferer fit_on_texts Tokenizer join astype float32 resize imread join format TFRecordWriter data_dir print write SerializeToString close Example lower split insert deepcopy shuffle _random_lexicon save round max open data_dir crop_margin SerializeToString getvalue Example getroot get format TFRecordWriter size close lower float crop enumerate join int BytesIO print text min write findall find save round max finditer open data_dir crop_margin SerializeToString getvalue Example format TFRecordWriter glob size group close _is_difficult float crop MULTILINE enumerate join int BytesIO print min write join format TFRecordWriter print data_dir close join str TFRecordWriter data_dir write SerializeToString close tqdm flatten Example loadmat join int format TFRecordWriter data_dir print write SerializeToString close Example lower split save round max open data_dir crop_margin SerializeToString getvalue Example getroot find get format TFRecordWriter size close lower float crop enumerate join int BytesIO print text min write findall split list TFRecordWriter data_dir min close num_images range shuffle start_index output_path tuple max round open str transpose tolist SerializeToString getvalue Example append asarray concatenate size sqrt _draw_cross crop enumerate minimum int BytesIO Draw write maximum extend tqdm 
_create_samples_of_an_image loadmat split append exponential_decay constant reshape concat float32 where greater int64 any reduce_min scalar _gather_clone_loss REGULARIZATION_LOSSES get_collection add_n _sum_clones_gradients len SUMMARIES get_collection set scope create_clones UPDATE_OPS append add_n zip global_norm isinstance name IndexedSlices histogram info append values as_list set_shape concat greater shape rank _set_dim_0 expand_dims cond gather _set_dim_0 range greater _set_dim_0 cond as_list shape append enumerate name list append match filter_variables name info filter_variables name info items list NewCheckpointReader sorted isinstance warning keys convert fromarray uint8 close getvalue save StringIO fromarray uint8 BytesIO close getvalue save convert array draw_bounding_box_on_image copyto truetype line Draw text size rectangle ceil getsize fromarray array draw_bounding_boxes_on_image copyto shape range draw_bounding_box_on_image draw_keypoints_on_image convert array copyto ellipse Draw tuple size zip fromarray ones_like list getrgb reshape convert logical_or any copyto expand_dims composite array int iteritems defaultdict format tuple draw_mask_on_image_array tolist min extend draw_bounding_box_on_image_array append draw_keypoints_on_image_array range combined_static_and_dynamic_shape reshape concat combined_static_and_dynamic_shape greater unstack expand_dims cond | # ASTER: Attentional Scene Text Recognizer with Flexible Rectification ASTER is an accurate scene text recognizer with flexible rectification mechanism. The research paper can be found [here](https://ieeexplore.ieee.org/abstract/document/8395027/).  The implementation of ASTER reuses code from [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). ## Update **[07/13/2019] A PyTorch [port](https://github.com/ayumiymk/aster.pytorch) has been made by [@ayumiymk](https://github.com/ayumiymk).** ## Correction (10/22/2018) We have identified a bug we accidentally made in the code that causes only part of SVT images being tested and results in higher results. The bug has been fixed in commit [a7e8613](https://github.com/bgshih/aster/commit/a7e8613d6308e5a7aacb1237dfa0286d73cef342). Below are the corrected numbers on SVT. The results are still state-of-the-art, so the conclusions are not affected. - SVT (50) ASTER: 97.4%; ASTER-A: 96.3%; ASTER-B: 96.1%; - SVT (None): ASTER: 89.5%; ASTER-A: 80.2%; ASTER-B: 81.6% | 1,545 |
bgshih/crnn | ['optical character recognition', 'scene text recognition'] | ['An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition'] | tool/create_dataset.py createDataset writeCache checkImageIsValid imdecode fromstring IMREAD_GRAYSCALE join str print len writeCache range open | Convolutional Recurrent Neural Network ====================================== This software implements the Convolutional Recurrent Neural Network (CRNN), a combination of CNN, RNN and CTC loss for image-based sequence recognition tasks, such as scene text recognition and OCR. For details, please refer to our paper http://arxiv.org/abs/1507.05717. **UPDATE Mar 14, 2017** A Docker file has been added to the project. Thanks to [@varun-suresh](https://github.com/varun-suresh). **UPDATE May 1, 2017** A PyTorch [port](https://github.com/meijieru/crnn.pytorch) has been made by [@meijieru](https://github.com/meijieru). **UPDATE Jun 19, 2017** For an end-to-end text detector+recognizer, check out the [CTPN+CRNN implementation](https://github.com/AKSHAYUBHAT/DeepVideoAnalytics/tree/master/notebooks/OCR) by [@AKSHAYUBHAT](https://github.com/AKSHAYUBHAT). Build ----- The software has only been tested on Ubuntu 14.04 (x64). CUDA-enabled GPUs are required. To build the project, first install the latest versions of [Torch7](http://torch.ch), [fblualib](https://github.com/facebook/fblualib) and LMDB. Please follow their installation instructions respectively. On Ubuntu, lmdb can be installed by ``apt-get install liblmdb-dev``. To build the project, go to ``src/`` and execute ``sh build_cpp.sh`` to build the C++ code. If successful, a file named ``libcrnn.so`` should be produced in the ``src/`` directory. | 1,546 |
bgshih/seglink | ['scene text detection', 'curved text detection'] | ['Detecting Oriented Text in Natural Images by Linking Segments'] | tool/convert_caffe_model/convert_caffemodel_to_ckpt.py seglink/unit_tests.py seglink/visualizations.py manage.py tool/convert_caffe_model/tests.py seglink/model_cnn.py seglink/evaluate.py seglink/config.py seglink/solver.py seglink/data.py tool/convert_caffe_model/dump_caffemodel_weights.py tool/create_datasets.py seglink/ops.py seglink/model.py seglink/utils.py clear build_op run_tf_program_with_json_config upload_logs start_tb test train clean_op test_preprocess input_stream train_preprocess postprocess_and_write_results_ic13 postprocess_and_write_results_ic15 evaluate SegLinkDetector SsdVgg16 load_oplib atrous_conv2d score_loss smooth_l1_loss max_pool conv2d conv_relu avg_pool _nn_variable Solver test_encode_decode_synth_data test_clip_rboxes test_encode_decode_real_data test_data_loading_and_preprocess test_max_pool_on_odd_sized_maps test_decode_combine_rboxes summarize_losses print_tensor_summary rboxes_to_polygons setup_logger log_git_version summarize_activations log_flags mkdir_if_not_exist rboxes_to_bboxes visualize_nodes visualize_rboxes visualize_segments_and_links convert_image_for_visualization visualize_combined_rboxes visualize_detection_each_layer visualize_links visualize_bboxes create_icdar2015_incidental_dataset _int64_feature _bytes_list_feature read_jpeg_check DatasetCreator_Icdar2013 DatasetCreator_Scut DatasetCreator_Td500 _int64_list_feature create_merge_multiple _bytes_feature create_synthtext_dataset _float_feature DatasetCreator DatasetCreator_Icdar2015Incidental _float_list_feature convert_caffemodel_to_ckpt dump_caffemodel_weights test_classify_image join chdir print system mkdir print join system join remove format print glob eval input len pop join list format items isinstance print chdir system abspath split len run_tf_program_with_json_config run_tf_program_with_json_config system greater_equal info build_model multiply reshape OFFSET_VARIANCE shape node_threshold link_threshold softmax cast int32 decode_segments_links SegLinkDetector combine_segments ConfigProto append enumerate minimum decode join format test_batch_size info rboxes_to_polygons astype maximum system int32 result_suffix float range minimum str join astype maximum save_image_and_lexicon bbox_scale int32 float range rboxes_to_bboxes join format load_op_library copyfile realpath dirname isinstance xavier_initializer_conv2d sqrt constant_initializer add_to_collection get_variable xavier_initializer truncated_normal_initializer get get softmax_cross_entropy_with_logits int64 cast one_hot encode_groundtruth print decode_prediction _generate_random_gt randint range len pack constant encode_groundtruth build_model train_preprocess train_record_path FctdDetector input_stream mkdir_if_not_exist batch enumerate constant clip_rboxes float32 _generate_random_rboxes shuffle_batch train_preprocess add_subplot figure input_stream mkdir_if_not_exist print astype float32 set_trace decode_combine_rboxes stdout setFormatter getLogger addHandler StreamHandler Formatter info DEBUG setLevel FileHandler items list format join info append str format check_output strip info name zero_fraction sub histogram info scalar ExponentialMovingAverage apply reduce_max zero_fraction reduce_mean shape reduce_min Print makedirs hstack hstack min max _rboxes_to_polygons uint8 asarray astype float32 IMAGE_BGR_MEAN join isinstance transpose copy add_patch imshow clf savefig Rectangle info range 
enumerate set_transform print add_patch rotate_around Rectangle transData expand_dims range Circle add_artist shape float range plot min shape float range clear visualize_rboxes print add_subplot convert_image_for_visualization imshow savefig Rectangle legend append range enumerate clear items list str join format visualize_rboxes print add_subplot shuffle convert_image_for_visualization det_layers imshow savefig visualize_links append range len clear join format visualize_rboxes print add_subplot convert_image_for_visualization imshow savefig range join str permutation arange list print TFRecordWriter transpose min write extend SerializeToString tqdm Example expand_dims loadmat range split join format basename print glob TFRecordWriter write shuffle SerializeToString Example exists enumerate format _create_next_sample n_samples concatenate print TFRecordWriter write shuffle SerializeToString _read_list append full range enumerate load build_model Vgg16Model float32 placeholder caffe_weights_path model_scope dump layers prototxt_path Net caffe_weights_path caffemodel_path range TEST len Vgg16Model float32 placeholder load_image_and_preprocess vgg16 | # SegLink Detecting Oriented Text in Natural Images by Linking Segments (https://arxiv.org/abs/1703.06520). ## Prerequisites The project is written in Python3 and C++ and relies on TensorFlow v1.0 or newer. We have only tested it on Ubuntu 14.04. If you are using other Linux versions, we suggest using Docker. CMake (version >= 2.8) is required to compile the C++ code. Install TensorFlow (GPU-enabled) by following the instructions on https://www.tensorflow.org/install/. The project requires no other Python packages. On Ubuntu 14.04, install the required packages by ``` sudo apt-get install cmake sudo pip install --upgrade tensorflow-gpu ``` ## Installation | 1,547 |
bharaniabhishek123/cs231n_project | ['semantic segmentation'] | ['Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net'] | train.py utils/b_dataset.py notebooks/utils.py utils/dataset.py utils/glued_image.py utils/pipeline.py models/ResUnet.py utils/DatasetImageFolder.py train_model get_args eval_model new_train_model images_to_probs convrelu ResNetUNet get_k_best_regions glue_to_one_picture select_k_best_regions display_images cluster_columns BasicDataset gv get_image_files_sorted compute_statistics plot_function draw_tree generate_patches gray2rgb BrainSegmentationDataset resize_sample outline pad_sample crop_sample normalize_volume dsc log_images get_k_best_regions glue_to_one_picture select_k_best_regions display_images BasicDataset compute_statistics generate_patches BiopsyDataClass ImageFolderWithPaths BasicDataset ProstateCancerDataset get_dataloaders tile_patriot zero_grad SGD DataLoader numpy save StepLR to double CrossEntropyLoss range state_dict SummaryWriter format replace eval_model add_histogram close eval mkdir info add_image int random_split deepcopy make_grid add_scalar print reshape add_graph BasicDataset named_parameters parameters train step len model add_images zero_grad SGD clip_grad_value_ DataLoader numpy save max StepLR to double CrossEntropyLoss range state_dict SummaryWriter format replace eval_model add_histogram close mkdir info item enumerate int random_split deepcopy make_grid criterion add_scalar backward reshape BasicDataset named_parameters parameters train step len squeeze numpy max model add_argument ArgumentParser eval train len subplots set_title plot set_xlabel f set_ylabel linspace export_graphviz show dendrogram figure correlation round linkage squareform count_nonzero sum mean compute_statistics select_k_best_regions get_k_best_regions append items list subplots suptitle imshow enumerate int list items sqrt zeros astype largest_connected_component min max nonzero pad min max shape resize percentile rescale_intensity mean std gray2rgb squeeze outline append numpy range shape empty abs max round nonzero zip shape pad reshape | # cs231n_project ## move_images.sh - copies the tiff images from gcloud to local gcloud compute scp torch-vm-vm:/project/cs231n_abhishek/CS231nProstateCancer/PyTorchCropImages.ipynb ~/Documents/myworkspace/cs231n_project gcloud compute scp --project "deep-learning-273611" --zone "us-west1-b" ### cuda quick check on vm import torch torch.cuda.current_device() torch.cuda.is_available() torch.cuda.get_device_name(0) it will give | 1,548 |
bharatsush/TextSpotting | ['optical character recognition', 'scene text recognition'] | ['An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition'] | local_utils/establish_char_dict.py tools/crnn_model/__init__.py tools/data_provider/data_provider.py tools/test_shadownet.py crnn_model/__init__.py tools/global_configuration/config.py data_provider/data_provider.py tools/train_shadownet.py tools/crnn_model/crnn_model.py global_configuration/__init__.py tools/data_provider/base_data_provider.py global_configuration/config.py tools/write_text_features.py tools/__init__.py data_provider/base_data_provider.py crnn_model/cnn_basenet.py tools/local_utils/log_utils.py local_utils/log_utils.py tools/data_provider/__init__.py tools/local_utils/data_utils.py tools/global_configuration/__init__.py tools/demo_shadownet.py tools/local_utils/establish_char_dict.py crnn_model/crnn_model.py local_utils/data_utils.py local_utils/__init__.py tools/crnn_model/cnn_basenet.py tools/local_utils/__init__.py data_provider/__init__.py CNNBaseModel ShadowNet Dataset TextDataset TextDataProvider TextFeatureWriter TextFeatureIO TextFeatureReader FeatureIO CharDictBuilder init_logger recognize init_args init_args test_shadownet train_shadownet init_args init_args write_features CNNBaseModel ShadowNet Dataset TextDataset TextDataProvider TextFeatureWriter TextFeatureIO TextFeatureReader FeatureIO CharDictBuilder init_logger join setFormatter getLogger addHandler getcwd WARNING StreamHandler Formatter dirname TimedRotatingFileHandler setLevel makedirs add_argument ArgumentParser ShadowNet ctc_beam_search_decoder TextFeatureIO astype float32 placeholder IMREAD_COLOR ConfigProto GPU_MEMORY_FRACTION Saver resize TF_ALLOW_GROWTH imread Session join int ShadowNet ctc_beam_search_decoder read_features reader ones close ceil GPU_MEMORY_FRACTION cast Saver tf_record_iterator TF_ALLOW_GROWTH ConfigProto shuffle_batch Session batch ctc_beam_search_decoder read_features localtime Saver LR_DECAY_STEPS exponential_decay Session str ShadowNet LR_DECAY_RATE ones ctc_loss get_collection merge_all strftime EPOCHS cast LEARNING_RATE TF_ALLOW_GROWTH format close FileWriter ConfigProto join time reader print Variable graph add_graph edit_distance GPU_MEMORY_FRACTION reduce_mean UPDATE_OPS int32 shuffle_batch scalar makedirs join print TextFeatureIO labels images TextDataProvider imagenames makedirs | # CRNN_Tensorflow for RNPD using India Dataset Use tensorflow to implement a Deep Neural Network for scene text recognition mainly based on the paper "An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition".You can refer to their paper for details http://arxiv.org/abs/1507.05717. Thanks for the author [Baoguang Shi](https://github.com/bgshih). This model consists of a CNN stage, RNN stage and CTC loss for scene text recognition task. ## Installation This software has only been tested on ubuntu 16.04(x64), python3.5, cuda-8.0, cudnn-6.0 with a GTX-1070 GPU. To install this software you need tensorflow 1.3.0 and other version of tensorflow has not been tested but I think it will be able to work properly in tensorflow above version 1.0. Other required package you may install them by ``` pip3 install -r requirements.txt ``` ## Test model In this repo I uploaded a model trained on a subset of the [Synth 90k](http://www.robots.ox.ac.uk/~vgg/data/text/). 
During the data preparation process the dataset is converted into TensorFlow records, which you can find in the data folder. | 1,549
bhartl/generative-models | ['data visualization'] | ['A high-bias, low-variance introduction to Machine Learning for physicists'] | gempy/maximum_entropy_model.py gempy/torch_/encoder.py examples/mnist/mnist_generator.py gempy/torch_/util.py gempy/let_it_scan_images.py gempy/torch_/variational_auto_encoder.py gempy/torch_/__init__.py examples/mnist/__init__.py gempy/encoder.py gempy/torch_/auto_encoder.py gempy/torch_/decoder.py gempy/torch_/mdn.py __init__.py gempy/__init__.py gempy/decoder.py nearest_neighbour_mapping MnistGenerator on_site nearest_neighbour delta_energy Decoder Encoder Lisi MaximumEntropyModel first_moment second_moment AutoEncoder Decoder ConvTDecoder ConvEncoder Encoder MixtureDiagNormalNetwork MixtureDensityNetwork ex_1d CategoricalNetwork conv_transpose_output_shape conv_output_shape get_conv_transpose_nd get_conv_nd get_layer_nd get_batch_norm_nd get_activation_function call_activation VariationalAutoEncoder empty range len append range ascontiguousarray len show subplot fit_direct gen_data DataLoader title TensorDataset MixtureDensityNetwork figure sample Tensor numpy plot_data fit tuple len tuple len getattr | # generative-models ## ```gempy``` An implementation of the *Maximum Entropy Generative* model based as discussed in the paper [*A high-bias, low-variance introduction to Machine Learning for physicists*](https://doi.org/10.1016/j.physrep.2019.03.001) ([or here on arXiv](https://arxiv.org/abs/1803.08823)) can be found in `gempy.maximum_entropy_model.py`, slides are located in `doc`. In `gempy.mnist.mnist_generator.py` we apply the Maximum Entropy Principle to generate (or fit) an Ising-based generative model on single numbers in the mnist-dataset. [`examples/mnist/mnist_generator.ipynb`](examples/mnist/mnist_generator.ipynb) provides a classifier cnn as well as `MNISTGenerator` instances which can be trained to generate numbers from `0` to `9` using the Maximum Entropy Principle, which can then be evaluated with the classifier network. Here are some example figures from the jupyter-notebook:  | 1,550 |
bhatiasiddharth/MIDAS | ['anomaly detection'] | ['Real-Time Anomaly Detection in Edge Streams', 'MIDAS: Microcluster-Based Detector of Anomalies in Edge Streams'] | util/DeleteTempFile.py util/EvaluateScore.py util/ReproduceROC.py util/PreprocessData.py darpa_original codes concat astype to_csv read_csv | # MIDAS <p> <a href="https://aaai.org/Conferences/AAAI-20/"> <img src="http://img.shields.io/badge/AAAI-2020-red.svg"> </a> <a href="https://arxiv.org/pdf/2009.08452.pdf"><img src="http://img.shields.io/badge/Paper-PDF-brightgreen.svg"></a> <a href="https://www.comp.nus.edu.sg/~sbhatia/assets/pdf/MIDAS_slides.pdf"> <img src="http://img.shields.io/badge/Slides-PDF-ff9e18.svg"> </a> <a href="https://youtu.be/Bd4PyLCHrto"> | 1,551 |
bhiziroglu/Siamese-Neural-Networks | ['one shot learning'] | ['Siamese neural networks for one-shot image recognition'] | gpu_run.py model.py cpu_run.py data.py main MNIST main Model Model fit Trainer | [](https://paperswithcode.com/sota/one-shot-learning-on-mnist?p=siamese-neural-networks-for-one-shot-image) # Siamese Neural Networks for One-shot Image Recognition An implementation of the [Siamese Neural Networks](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) in PyTorch, trained and tested on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. ## Requirements * torchvision==0.5.0 * torch==1.4.0 * numpy==1.16.3 * pytorch_lightning==0.5.3.2 * Pillow==7.0.0 *requirements.txt* is provided | 1,552 |
bhpfelix/Variational-Autoencoder-PyTorch | ['style transfer'] | ['Deep Feature Consistent Variational Autoencoder'] | src/util.py src/vanila_vae.py load_pickle disp_to_term save_pickle resume_training latent_space_transition perform_latent_space_arithmatics rand_faces last_model_to_cpu VAE test loss_function load_last_model train load_batch write flush load close open dump close open append array open reconstruction_function add_ mul_ format model Variable backward print len zero_grad loss_function step cuda load_batch data format model Variable print eval save_image cuda load_batch data decode ndf ngf linspace save_image cuda list view iter append cat nc get_latent_var stack eval load_last_model zip Variable split data decode ndf ngf linspace save_image cuda list view iter append cat nc get_latent_var stack eval load_last_model zip Variable split data decode latent_variable_size randn Variable eval load_last_model save_image cuda load glob print load_state_dict max format test load_last_model save train epochs range state_dict cpu load_last_model save state_dict | # Variational Autoencoder for face image generation in PyTorch Variational Autoencoder for face image generation implemented with PyTorch, Trained over a combination of CelebA + FaceScrub + JAFFE datasets. Based on Deep Feature Consistent Variational Autoencoder (https://arxiv.org/abs/1610.00291 | https://github.com/houxianxu/DFC-VAE) TODO: Add DFC-VAE implementation Pretrained model available at https://drive.google.com/open?id=0B4y-iigc5IzcTlJfYlJyaF9ndlU ## Results Original Faces vs. Reconstructed Faces: <div> <img src='imgs/Epoch_28_data.jpg', width="48%"> <img src='imgs/Epoch_28_recon.jpg', width="48%"> | 1,553 |
biboamy/FBA-Fall19 | ['time series'] | ['Score-informed Networks for Music Performance Assessment'] | data_processing/data_check.py JointEmbedNet/lib.py SIConvNet/config.py JointEmbedNet/train.py JointEmbedNet/train_utils.py SIConvNet/eval.py data_processing/data_process.py data_processing/data_matrix_resize.py DistMatNet/train_utils.py SIConvNet/model.py DistMatNet/lib.py SIConvNet/train.py SIConvNet/trainer.py DistMatNet/config.py DistMatNet/model.py JointEmbedNet/eval.py SIConvNet/lib.py data_processing/data_split.py DistMatNet/eval.py DistMatNet/train.py JointEmbedNet/model.py JointEmbedNet/config.py data_processing/data_matrix.py data_processing/DataUtils.py DataUtils check_failed_perfs_and_remove check_num_by_year simpleDTW computeDistanceMatrixAndAlignment compute_mat_aln_for_all load_data combine_h5 combine_alignment createMatrixDill generate_newdata_newsplit generate_instrument_split main evaluate_classification evaluate_model load_test_data load_data loss_func Data2Torch block ConvNet_Fixed DistMatNet main Trainer main evaluate_classification evaluate_model load_test_data get_weight test_collate load_data normalize_pc_and_sc my_collate distance_loss Data2Torch check_missing_alignedmidi PCConvLstmNet PCConvNet JointEmbedNet main Trainer eval_main evaluate_classification evaluate_model load_test_data normalize_midi compute_kld_loss normalize_pitch test_collate load_data normalize_pc_and_sc my_collate Data2Torch mse_loss SIConvNet PCPerformanceVAE PCConvNet train_main Trainer load print open arange print ones astype load_data append len load format open array len int arange zeros_like argmin min shape reverse append range simpleDTW arange zeros_like abs min astype float16 mean decimate log2 linspace zeros argmax array len str time format arange print load_data len load str format write_incre_h5 arange print File close shape pad create_dataset open array len load str format arange print len array open print astype float32 resize append len seed load int arange print len shuffle array open load format print append array open print squeeze astype r2_score accuracy_score pearsonr model extend unsqueeze numpy enumerate load_test_data str format append print dict DataLoader eval load_data DistMatNet load_state_dict is_available Data2Torch sum cuda range evaluate_model len load format open mse MSELoss seed manual_seed_all fit today Trainer device manual_seed to makedirs view reshape eval join dirname JointEmbedNet load format arange print append array open log2 array random_chunk window_chunk non_overlap overlap reshape cos loss_func MSELoss CosineSimilarity astype float32 mean from_numpy sum cat load_test_data join append print PCConvNet dict DataLoader PCPerformanceVAE load_data load_state_dict dirname is_available Data2Torch SIConvNet cuda range evaluate_model makedirs loss_func MSELoss mean abs kl_divergence seed manual_seed_all print fit PCConvNet Trainer DataLoader PCPerformanceVAE load_data manual_seed is_available Data2Torch SIConvNet cuda range makedirs | # Score-Informed Networks for Music Performance Assessment This is the repository for paper **Score-Informed Networks for Music Performance Assessment**. Paper, poster and video presentation: https://program.ismir2020.net/poster_6-18.html The audio recordings are provided by [Florida Bandmasters Association](https://fba.flmusiced.org/). | 1,554 |
big-bombom/deepfake | ['face swapping'] | ['DeepFaceLab: Integrated, flexible and extensible face-swapping framework'] | models/Model_XSeg/Model.py mainscripts/VideoEd.py merger/MergerScreen/MergerScreen.py mainscripts/dev_misc.py merger/InteractiveMergerSubprocessor.py samplelib/SampleGeneratorImageTemporal.py core/leras/models/__init__.py samplelib/Sample.py core/joblib/__init__.py core/imagelib/estimate_sharpness.py core/joblib/MPFunc.py core/leras/layers/DenseNorm.py core/leras/nn.py samplelib/SampleProcessor.py core/leras/archis/__init__.py core/leras/layers/Conv2D.py core/stdex.py core/imagelib/text.py core/leras/models/PatchDiscriminator.py core/leras/__init__.py models/Model_XSeg/__init__.py core/leras/layers/DepthwiseConv2D.py core/imagelib/equalize_and_stack_square.py samplelib/__init__.py core/imagelib/sd/draw.py core/randomex.py core/joblib/MPClassFuncOnDemand.py core/leras/optimizers/__init__.py core/leras/initializers/__init__.py mainscripts/Extractor.py mainscripts/Sorter.py core/mathlib/__init__.py core/leras/optimizers/OptimizerBase.py core/imagelib/draw.py samplelib/SampleGeneratorBase.py core/leras/layers/BatchNorm2D.py core/leras/layers/ScaleAdd.py XSegEditor/QCursorDB.py models/__init__.py core/joblib/SubprocessGenerator.py DFLIMG/DFLIMG.py core/leras/layers/__init__.py models/ModelBase.py facelib/FaceEnhancer.py core/joblib/ThisThreadGenerator.py XSegEditor/QIconDB.py core/imagelib/blursharpen.py merger/MergeMasked.py core/cv2ex.py models/Model_Quick96/__init__.py main.py core/leras/archis/ArchiBase.py localization/localization.py facelib/LandmarksProcessor.py core/leras/layers/Dense.py models/Model_SAEHD/__init__.py mainscripts/Merger.py mainscripts/XSegUtil.py core/imagelib/filters.py XSegEditor/QStringDB.py core/leras/initializers/CA.py core/imagelib/__init__.py core/leras/models/ModelBase.py samplelib/SampleGeneratorFaceTemporal.py core/leras/layers/InstanceNorm2D.py facelib/S3FDExtractor.py samplelib/SampleLoader.py samplelib/SampleGeneratorFaceXSeg.py core/qtex/qtex.py DFLIMG/__init__.py core/leras/optimizers/RMSprop.py core/structex.py core/leras/layers/BlurPool.py facelib/__init__.py core/leras/archis/DeepFakeArchi.py core/imagelib/SegIEPolys.py mainscripts/FacesetEnhancer.py merger/FrameInfo.py models/Model_SAEHD/Model.py core/mplib/__init__.py samplelib/SampleGeneratorFacePerson.py merger/MergerScreen/__init__.py core/imagelib/common.py core/imagelib/sd/__init__.py core/leras/layers/AdaIN.py core/leras/layers/LayerBase.py facelib/FANExtractor.py core/leras/models/XSeg.py samplelib/SampleGeneratorFace.py merger/MergeAvatar.py core/pathex.py core/leras/layers/Conv2DTranspose.py core/qtex/QXMainWindow.py merger/__init__.py core/imagelib/morph.py core/joblib/SubprocessorBase.py mainscripts/Trainer.py samplelib/PackedFaceset.py core/mathlib/umeyama.py mainscripts/Util.py core/interact/__init__.py core/leras/layers/FRNorm2D.py core/leras/layers/Saveable.py core/leras/layers/TLU.py facelib/XSegNet.py core/mplib/MPSharedList.py core/leras/models/CodeDiscriminator.py facelib/FaceType.py XSegEditor/XSegEditor.py merger/MergerConfig.py core/leras/ops/__init__.py core/interact/interact.py core/leras/device.py core/imagelib/sd/calc.py core/imagelib/reduce_colors.py samplelib/SampleGeneratorImage.py localization/__init__.py DFLIMG/DFLJPG.py core/qtex/__init__.py core/qtex/QSubprocessor.py core/qtex/QXIconButton.py samplelib/SampleGeneratorFaceCelebAMaskHQ.py core/imagelib/warp.py core/osex.py core/imagelib/color_transfer.py models/Model_Quick96/Model.py process_dev_test 
process_merge process_videoed_cut_video process_train process_faceset_enhancer process_xsegeditor process_xsegapply process_xsegremove process_xsegremovelabels process_videoed_video_from_sequence process_xsegfetch process_util process_extract fixPathAction process_videoed_extract_video process_sort bad_args process_videoed_denoise_image_sequence cv2_imwrite cv2_imread set_process_dpi_aware get_screen_size set_process_lowest_prio get_image_paths move_all_files write_bytes_safe get_first_file_by_stem get_image_unique_filestem_paths get_all_dir_names get_file_paths delete_all_files scantree get_paths get_all_dir_names_startswith random_normal suppress_stdout_stderr struct_unpack LinearMotionBlur blursharpen _scale_array color_transfer color_transfer_idt color_transfer_mkl reinhard_color_transfer lab_image_stats linear_color_transfer channel_hist_match color_transfer_mix color_transfer_sot color_hist_match overlay_alpha_image cut_odd_image normalize_channels draw_polygon draw_rect equalize_and_stack_square compute _calculate_sharpness_metric marziliano_method get_block_contrast _simple_thinning estimate_sharpness is_edge_block sobel apply_random_motion_blur apply_random_rgb_levels apply_random_hsv_shift apply_random_bilinear_resize apply_random_gaussian_blur morphTriangle morph_by_points applyAffineTransform reduce_colors SegIEPolys SegIEPolyType SegIEPoly get_text_image draw_text_lines draw_text _get_pil_font get_draw_text_lines warp_by_params gen_warp_params dist_to_edges random_circle_faded circle_faded InteractBase InteractColab InteractDesktop MPClassFuncOnDemand MPFunc SubprocessGenerator Subprocessor ThisThreadGenerator Devices Device nn ArchiBase DeepFakeArchi CAInitializerSubprocessor initializers AdaIN BatchNorm2D BlurPool Conv2D Conv2DTranspose Dense DenseNorm DepthwiseConv2D FRNorm2D InstanceNorm2D LayerBase Saveable ScaleAdd TLU CodeDiscriminator ModelBase PatchDiscriminator XSeg dssim concat average_gv_list resize2d_bilinear flatten rgb_to_lab space_to_depth tf_gradients random_binomial style_loss gelu init_weights tf_get_value upsample2d reshape_4D batch_set_value max_pool average_tensor_list gaussian_blur depth_to_space OptimizerBase RMSprop umeyama get_power_of_two rotationMatrixToEulerAngles polygon_area ArrayFillerSubprocessor MPSharedList IndexHost Index2DHost ListHost DictHostCli DictHost QSubprocessor QDarkPalette QActionEx QSize_to_np QImage_from_np QImage_to_np QPixmap_from_np QPoint_to_np QPoint_from_np QXIconButton QXMainWindow DFLIMG DFLJPG FaceEnhancer FaceType FANExtractor blur_image_hull_mask mirror_landmarks get_face_struct_mask estimate_pitch_yaw_roll convert_98_to_68 expand_eyebrows get_rect_from_landmarks get_transform_mat draw_rect_landmarks get_cmask transform_points estimate_averaged_yaw calc_face_pitch alpha_to_color get_image_eye_mask draw_landmarks get_image_hull_mask S3FDExtractor XSegNet dev_test_68 dev_test1 dev_resave_pngs extract_vggface2_dataset extract_umd_csv dev_segmented_trash process_folder FacesetEnhancerSubprocessor extract_video video_from_sequence denoise_image_sequence cut_video remove_xseg remove_xseg_labels apply_xseg fetch_xseg FrameInfo InteractiveMergerSubprocessor MergeFaceAvatar process_frame_info MergeMasked MergeMaskedFace MergerConfigMasked MergerConfigFaceAvatar MergerConfig ScreenManager ScreenAssets Screen ModelBase import_model QModel SAEHDModel XSegModel PackedFaceset Sample SampleType SampleGeneratorBase SampleGeneratorFace SampleGeneratorFaceCelebAMaskHQ MaskType SampleGeneratorFacePerson SampleGeneratorFaceTemporal 
SampleGeneratorFaceXSeg SegmentedSampleFilterSubprocessor SampleGeneratorImage SampleGeneratorImageTemporal FaceSamplesLoaderSubprocessor SampleLoader SampleProcessor QCursorDB QIconDB QStringDB ImagePreviewSequenceBar QUIConfig QCanvasOperator LoaderQSubprocessor CanvasConfig OpMode QCanvas DragType ViewLock ColorScheme QCanvasControlsLeftBar start QCanvasControlsRightBar MainWindow PTEditMode main set_process_lowest_prio main set_process_lowest_prio unpack_faceset pack save_faceset_metadata log_info restore_faceset_metadata_folder pack_faceset save_faceset_metadata_folder restore_faceset_metadata Path input_dir unpack recover_original_aligned_filename set_process_lowest_prio add_landmarks_debug_images main set_process_lowest_prio main set_process_lowest_prio output_ext fps extract_video output_dir input_file set_process_lowest_prio audio_track_id from_time bitrate to_time cut_video input_file set_process_lowest_prio factor denoise_image_sequence set_process_lowest_prio input_dir video_from_sequence set_process_lowest_prio Path set_process_lowest_prio input_dir process_folder dev_test set_process_lowest_prio input_dir start Path set_process_lowest_prio input_dir model_dir apply_xseg Path input_dir set_process_lowest_prio Path remove_xseg set_process_lowest_prio input_dir remove_xseg_labels Path set_process_lowest_prio input_dir Path fetch_xseg set_process_lowest_prio input_dir print_help exit loader_func asarray bytearray imencode suffix nice SetPriorityClass HANDLE GetCurrentProcess SetProcessDPIAware user32 write_bytes parent name unlink rename exists is_dir scandir str list scandir any Path scantree exists append remove get_image_paths name stem set add verbose_print_func Path exists Path exists Path exists str list lower scandir Path startswith append exists str sorted list path lower scandir Path exists name Path rename get_file_paths unlink Path get_file_paths normal empty prod range calcsize warpAffine ones getRotationMatrix2D zeros sum medianBlur addWeighted ones zeros GaussianBlur max dtype reshape astype copy argsort shape bilateralFilter fill empty range eps T clip reshape eig dot shape sqrt cov mean diag T reshape min astype float32 empty_like solve dot shape histogram interp max range _scale_array uint8 astype float32 merge lab_image_stats COLOR_LAB2BGR cvtColor split T reshape transpose mean dot eigh eye cholesky split min max float64 astype shape unique interp ravel dtype astype shape channel_hist_match range uint8 astype float32 COLOR_BGR2LAB color_transfer_sot COLOR_LAB2BGR cvtColor uint8 color_transfer_idt color_transfer_mkl astype float32 reinhard_color_transfer linear_color_transfer color_transfer_sot clip shape repeat len shape shape range tuple line range len draw_polygon concatenate shape resize expand_dims max enumerate T convolve square mean sqrt array shape zeros float64 marziliano_method astype canny sobel gradient atan2 shape any zeros round range int exp slice get_block_contrast shape flipud round zeros is_edge_block rot90 range cvtColor COLOR_BGR2GRAY rand random clip array COLOR_HSV2BGR random merge COLOR_BGR2HSV randint clip cvtColor split LinearMotionBlur randint random randint GaussianBlur random int rand random shape resize INTER_LINEAR float32 getAffineTransform float32 fillConvexPoly shape boundingRect int32 applyAffineTransform zeros expand_dims array shape morphTriangle zeros simplices fromarray uint8 convert astype COLOR_RGB2BGR array cvtColor truetype asarray Draw get_default_ttf_font_name concatenate text new _get_pil_font shape clip draw_text 
range len draw_text_lines zeros shape T random astype copy float32 getRotationMatrix2D dict uniform linspace random_normal warpAffine remap norm clip einsum concatenate norm reshape empty abs clip max random randint initializer inputs append batch_set_value run gradients expand_dims __enter__ __exit__ enumerate reduce_mean __enter__ __exit__ concat pow tanh sqrt pi as_list reshape tile transpose value resize transpose reshape transpose randint float32 pad make_kernel tile depthwise_conv2d gaussian_blur dtype constant arange reshape float32 square reduce_mean reducer cast softmax tile as_list reshape transpose as_list reshape transpose constant reshape multiply matmul cast svd T ones matrix_rank mean dot eye sum diag sqrt atan2 shape Format_Grayscale8 Format_BGR888 Format_ARGB32 height reshape convertToFormat width constBits setsize range squeeze invertAffineTransform shape transform expand_dims get norm getAffineTransform polygon_area astype float32 transform_points sqrt estimate_averaged_yaw array transform_points FULL_NO_ALIGN get_transform_mat float32 array copy concatenate expand_eyebrows fillConvexPoly convexHull zeros int getStructuringElement astype fillConvexPoly MORPH_ELLIPSE convexHull dilate zeros GaussianBlur shape zeros concatenate process copy blend alpha_to_color zeros get_image_hull_mask gdf max clip int blur getStructuringElement min erode argwhere MORPH_ELLIPSE expand_dims copy draw_landmarks zeros expand_eyebrows concatenate polylines tuple shape get_image_hull_mask array circle get_transform_mat draw_rect transform_points draw_polygon draw_landmarks array array rotationMatrixToEulerAngles concatenate astype float32 pi solvePnP zeros array clip get pop get_image_paths parent log_info name stem progress_bar_generator get_all_dir_names Path mkdir run fromString split cv2_imread Path normalize_channels exists input_bool str log_info name stem append get_image_paths get_rect_from_landmarks unlink mkdir parent cv2_imwrite progress_bar_generator read_text split get str get_image_paths parent log_info name len unlink Path mkdir split log_err run range exists fromString input_bool get_image_paths progress_bar_generator get_all_dir_names Path x get_image_paths cv2_imwrite progress_bar_generator cv2_imread Path get_image_paths parent name stem rename Path mkdir append input_bool join get_image_paths log_info parent name copy unlink rmtree mkdir run update str get_image_paths parent input_str stem output get_first_file_by_stem unlink input_int mkdir Path log_err input run str suffix parent input_str stem overwrite_output input_int log_err Path input max run update str suffix parent progress_bar_generator output input_int rename log_err Path run clip enumerate suffix input_str wait input_int Path max input_bool str stem input update run_async get_image_paths close mkdir parent overwrite_output get_first_file_by_stem log_err probe load extract initialize get_image_paths log_info set_xseg_mask progress_bar_generator astype float32 get_resolution ask_choose_device shape XSegNet resize save load str get_image_paths log_info parent name has_polys progress_bar_generator copy get_seg_ie_polys mkdir load get_image_paths log_info set_xseg_mask input_str progress_bar_generator has_xseg_mask save load get_image_paths log_info input_str has_seg_ie_polys progress_bar_generator save set_seg_ie_polys warpAffine get_transform_mat astype float32 cv2_imread normalize_channels filename clip sharpen_func sharpen_mode concatenate predictor_func add_source_image process_frame_info temporal_face_count 
append range sharpen_amount predictor_func color_transfer_mkl motion_power bicubic_degrade_power motion_blur_power linear_color_transfer color_transfer_mix boundingRect resize reduce_colors max clip face_enhancer_func hist_match_threshold medianBlur super_resolution_power WARP_INVERSE_MAP ones LinearMotionBlur shape pad blur_mask_modifier image_denoise_power masked_hist_match blursharpen range color_hist_match BORDER_TRANSPARENT warpAffine sharpen_mode xseg_256_extract_func seamlessClone color_transfer_idt astype copy reinhard_color_transfer empty_like motion_deg INTER_CUBIC MORPH_ELLIPSE color_transfer_sot dilate GaussianBlur get_image_hull_mask NORMAL_CLONE uint8 int erode_mask_modifier getStructuringElement get_transform_mat float32 nan_to_num erode argwhere blursharpen_amount color_degrade_power landmarks_list concatenate astype float32 cv2_imread shape normalize_channels MergeMaskedFace filepath clip enumerate str parent cv2_imread locals __import__ globals dict setApplicationName setPalette QDarkPalette Path show str initialize log_info setWindowIcon addApplicationFont AA_EnableHighDpiScaling setStyle setFont gettempdir setAttribute QApplication path_contains app_icon MainWindow exec_ parent QFont raise_ AA_UseHighDpiPixmaps | <table align="center" border="0"><tr><td align="center" width="9999"> # DeepFaceLab <a href="https://arxiv.org/abs/2005.05535"> <img src="https://static.arxiv.org/static/browse/0.3.0/images/icons/favicon.ico" width=14></img> https://arxiv.org/abs/2005.05535</a> ### the leading software for creating deepfakes <img src="doc/DFL_welcome.png" align="center"> </td></tr> <tr><td align="center" width="9999"> <p align="center"> | 1,555 |
bill317996/Singer-identification-in-artist20 | ['data augmentation'] | ['Addressing the confounds of accompaniments in singer identification'] | extract_melody.py utility.py train_CRNN_melody.py extract_fea.py predict_on_audio.py train_CRNN.py model.py wave2spec create_dataset_parellel get_melody_contour extract_melody CRNN2D_elu CRNN2D_elu2 parser predict wave2spec encode_labels create_spectrogram_plots load_dataset_album_split_da simple_encoding load_dataset create_dataset_non_vocal_parellel predict_artist slice_songs load_dataset_song_split create_dataset_mix_vocal_parellel get_vocal_idx plot_history plot_confusion_matrix wave2spec_mix_vocal slice_songs_dam slice_songs_da wave2spec_non_vocal visualize_spectrogram create_dataset_parellel load_dataset_album_split_dam load_dataset_album_split create_dataset create_dataset_mix_vocal load join amplitude_to_db tqdm melspectrogram join len close map tqdm split append Pool listdir array makedirs range astype mel_frequencies len load int tqdm save get_melody_contour predict load int time Classifier print amplitude_to_db CRNN2D_elu2 eval load_state_dict melspectrogram float cuda range append add_argument ArgumentParser load logamplitude tight_layout colorbar title figure melspectrogram specshow load join print logamplitude melspectrogram listdir makedirs load join print logamplitude melspectrogram listdir makedirs join len close map tqdm split append Pool listdir array makedirs join len close map tqdm split append Pool listdir array makedirs logamplitude load join logamplitude tqdm melspectrogram load join list logamplitude set_trace set tqdm melspectrogram range abs amplitude_to_db mean sum std remove RandomState choice append listdir seed join pop load RandomState shuffle extend choice append listdir split seed join pop RandomState shuffle extend choice append listdir split seed join pop RandomState shuffle extend choice append listdir split train_test_split load_dataset append int range enumerate append int range enumerate append int range enumerate load join subplots print axes logamplitude tight_layout choice title melspectrogram specshow listdir list format arange product yticks text xlabel astype tight_layout colorbar ylabel imshow title xticks max range len show plot xlabel ylabel title legend print classification_report shuffle confusion_matrix array plot_confusion_matrix unique append argmax max predict reshape toarray OneHotEncoder LabelEncoder transform fit_transform LabelEncoder | # Singer-identification-in-artist20 The source code of "Addressing the confounds of accompaniments in singer identification" - arxiv: https://arxiv.org/abs/2002.06817 ### Dependencies Requires following packages: - python 3.6 - pytorch 1.3 - crepe 0.0.10 - librosa 0.7.1 - dill 0.3.1.1 | 1,556 |
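The Singer-identification row above lists librosa among its dependencies, and its function listing includes `wave2spec`, `melspectrogram`, and `amplitude_to_db`, i.e., audio is turned into log-mel spectrograms before the CRNN sees it. The sketch below is an illustrative, self-contained version of that preprocessing step, not the repo's exact `wave2spec`; the sample rate and mel-band count are assumptions.

```python
# Illustrative log-mel extraction in the spirit of wave2spec (parameters are assumptions).
import librosa
import numpy as np

def wave_to_logmel(path, sr=16000, n_mels=128):
    y, _ = librosa.load(path, sr=sr)                                # mono waveform
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.amplitude_to_db(mel)                           # dB scale, as in the listing above
    return logmel.astype(np.float32)                                # shape: (n_mels, n_frames)
```

A classifier such as the `CRNN2D_elu2` named in the listing would then presumably consume batches of these (n_mels, n_frames) arrays.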
billsun9/automated-style-transfer | ['style transfer'] | ['Exploring the structure of a real-time, arbitrary neural artistic stylization network'] | utils.py style_transfer.py flask_app.py | upload_file results load_model crop_center save_n load_image style_transfer show_predictions purge allowed_file purge join purge secure_filename save filename style_transfer shape min max crop_to_bounding_box resize stack crop_center subplot axis close GridSpec cla imshow title clf savefig figure range len load str load_model model load_image save_n randint avg_pool remove listdir append join listdir | # automated-style-transfer This web application provides a means of automatically transferring the artistic style of various famous pieces of artwork to an arbitrary user image. It was built using Python, HTML, CSS, and Javascript. The neural transfer model utilized in the web app was based on the model code in magenta (https://github.com/magenta/magenta/tree/master/magenta/models/arbitrary_image_stylization) and the publication: Exploring the structure of a real-time, arbitrary neural artistic stylization network. (https://arxiv.org/abs/1705.06830) Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, Jonathon Shlens, Proceedings of the British Machine Vision Conference (BMVC), 2017. Libraries used include Keras, Tensorflow, Matplotlib, and Numpy. | 1,557 |
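The automated-style-transfer row above says its model is based on magenta's arbitrary image stylization code (Ghiasi et al., 2017). A published TF Hub export of that model exists, and the hedged sketch below shows one way to run it; the hub handle, the [0, 1] float preprocessing convention, and the function name are assumptions about typical usage, not the app's own `load_model`/`style_transfer` helpers.

```python
# Hedged sketch: arbitrary style transfer via the published magenta TF Hub module.
import tensorflow as tf
import tensorflow_hub as hub

def stylize(content, style):
    """content/style: float32 tensors in [0, 1] with shape [1, H, W, 3]."""
    module = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
    stylized = module(tf.constant(content), tf.constant(style))[0]
    return stylized  # [1, H', W', 3] stylized image
```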
billy-inn/HRERE | ['link prediction', 'relation extraction'] | ['Connecting Language and Knowledge with Heterogeneous Representations for Neural Relation Extraction'] | get_embeddings.py real_hrere.py utils/embedding_utils.py complex_hrere.py utils/logging_utils.py final_plot.py eval.py utils/data_utils.py utils/pkl_utils.py model_param_space.py create_kg.py preprocess.py data/prepare_data.py config.py task.py model.py bilstm.py BiLSTM ComplexHRERE transform main main parse_args main plot main parse_args complex real Model ModelParamSpace word_preprocess search group create_test_set preprocess main parse_args find_pos RealHRERE TaskOptimizer Task main parse_args AttrDict download_file_from_google_drive save_response_content get_confirm_token flatten batch_iter load_dict_from_txt Embedding _get_logger _load _save arange concat RAW_TRAIN_DATA open E2ID list map apply intersection train_test_split a1 close set enumerate RAW_TEST_DATA print makedirs write to_csv KG_PATH KB zeros a2 read_csv len add_option runs _get_logger evaluate Task LOG_DIR prefix model_name isoformat load join PLOT_FIG_DIR xlabel rc grid ylabel precision_recall_curve ylim clf savefig PLOT_DATA_DIR legend xlim range PLOT_OUT_DIR len plot load E2ID load_dict_from_txt KG_PATH uniform save RELATION_EMBEDDING R2ID ENTITY_EMBEDDING load E2ID RELATION_EMBEDDING2 load_dict_from_txt RELATION_EMBEDDING1 KG_PATH uniform ENTITY_EMBEDDING1 save ENTITY_EMBEDDING2 R2ID complex real len any maketrans translate range len word_preprocess search extend split range len E2ID len1 e2 load_dict_from_txt len2 astype map to_csv R2ID e1 read_csv s groupby reset_index print append range array read_csv _save print read_csv append BAG_SIZE array range _save group CLEAN_TEST_DATA create_test_set preprocess CLEAN_TRAIN_DATA GROUPED_TEST_DATA GROUPED_TRAIN_DATA sample max_evals TaskOptimizer run get get_confirm_token save_response_content Session items list startswith seed int permutation arange min array range len append array range len RotatingFileHandler setFormatter getLogger addHandler Formatter setLevel | # HRERE Connecting Language and Knowledge with Heterogeneous Representations for Neural Relation Extraction Paper Published in NAACL 2019: [HRERE](https://arxiv.org/abs/1903.10126) ### Prerequisites - tensorflow >= r1.2 - hyperopt - gensim - sklearn ### Dataset To download the dataset used: | 1,558 |
biomedia-mira/stochastic_segmentation_networks | ['semantic segmentation'] | ['Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty'] | evaluation/utils/stats.py ssn/trainer/logger.py evaluation/running_metrics/running_probability_distribution.py ssn/models/__init__.py evaluation/running_metrics/samplers.py evaluation/visualisation/report.py ssn/nifti/savers.py ssn/trainer/inference.py evaluation/evaluator.py ssn/trainer/distributions.py ssn/models/stochastic_deepmedic.py ssn/trainer/metrics.py evaluation/running_metrics/running_metric.py ssn/inference.py evaluation/running_metrics/running_confusion_matrix.py ssn/models/deepmedic.py ssn/models/base.py ssn/nifti/augmention.py ssn/nifti/transformation.py ssn/nifti/datasets.py ssn/trainer/hooks.py ssn/read_config.py ssn/nifti/patch_samplers.py evaluation/generate_samples.py ssn/train.py ssn/trainer/model_trainer.py evaluation/evaluate.py evaluation/metrics/overlap_metrics.py evaluation/preprocessing.py evaluation/visualisation/slice_visualizer.py ssn/toy_problem.py ssn/trainer/losses.py evaluation/metrics/distribution_statistics.py evaluation/running_metrics/running_sample_generator.py get_class_weigthed_samplers get_samplers_deterministic evaluate make_sample_thumbs get_samplers_stochastic save_image Evaluator ImageReader get_class_weigthed_samplers get_samplers_deterministic evaluate make_sample_thumbs get_samplers_stochastic fix_segmentation_labels get_brain_mask z_score_normalisation DistributionStatistics add_volume_metrics compress_running_cm merge_cm_classes calc_relative_error update_dataframe_with_volume_metrics compute_voxel_volume_ml OverlapMetrics cm_metrics_to_dataframe RunningConfusionMatrix RunningMetric calc_generalised_energy_distance calc_sample_diversity iou RunningDistributionStatistics distance calc_marginal_entropy calc_loglikelihood RunningSampleGenerator LowRankMultivariateNormalRandomSampler LowRankMultivariateNormalTemperatureScaledRandomSampler tensors_to_sitk_images LowRankMultivariateNormalClassWeightedRangeSampler cast_to_tensor CategoricalSampler LowRankMultivariateNormalWeightedSampler Sampler CategoricalDeterministicSampler calculate_roc_auc_standard_error np_mode clopper_pearson intra_class_coefficient iqr nanstderr iqr_v2 compute_metrics_from_cm pearsonr_correlation report latex_formatted_report ArrayStatistics MostLoadedSliceVisualizerPerClass centered_resize DummySliceVisualiser SliceVisualizer MostLoadedSliceVisualiser run_inference DiagonalModel show_images calculate_loglikelihood show_cov_matrix get_slide_bar_binary_target run_toy_problem get_on_off_ideal_solution expand_target evaluate_distribution LowRankModel get_on_off_binary_target LowRankMultivariateNormal get_prob run_ensemble set_device run_job set_random_seed PreActBlock crop_center SqueezeAndExciteBlock BiomedicalBlock UpSample calculate_convolution_output_size UpConv DownSample BiomedicalModule TrainableTanhNormalization calculate_convolution_input_size DeepMedic Path StochasticDeepMedic RandomPatchFlip RandomHistogramDeformation RandomElasticDeformationSimard2003 RandomGammaCorrection RandomElasticDeformationCoarse RandomPatchRotation RandomElasticDeformation RandomElasticDeformationCoarsePerlinNoise RandomAugmentation NiftiDataset PatchWiseNiftiDataset FullImageToOverlappingPatchesNiftiDataset worker_init_fn BoundingBoxCenteredPatchSampler ForegroundBackgroundPatchSampler ConditionalPatchSampler get_patch_and_padding PatchSampler StochasticPatchSampler RandomPatchSampler reconstruct_image save_image 
get_num_maps NiftiPatchSaver IntensityWindowNormalization Transformation MaskImageUsingSamplingMask ReshapedDistribution ModelSaverHook Evaluator Hook NaNLoss TrainingEvaluator ValidationEvaluator ModelInferenceEnsemble ModelInference get_logger CrossEntropyLoss StochasticSegmentationNetworkLossMCIntegral RunningConfusionMatrix calc_accuracy calc_recall report_mean_and_std to_np_cpu SegmentationImageThumbs SegmentationMetrics calc_f1_score Loss Metric calc_precision report_scalar ClassificationMetrics TrackTensor MultiChannelTensorDistribution detach_state ModelTrainer predict_multi_target predict_regression predict_exclusive update range update items get_class_weigthed_samplers list evaluator get_samplers_deterministic make_sample_thumbs join min Evaluator dataframes report add_merged_dataframe get_samplers_stochastic dirname DistributionStatistics keys OverlapMetrics replace join dirname WriteImage makedirs uint8 CopyInformation astype GetImageFromArray sitkUInt8 Cast percentile mean logical_and std GetImageFromArray CopyInformation GetArrayFromImage abs abs abs calc_relative_error list set zeros sum range DataFrame sum iou reshape astype mean distance eye bool dtype type log einsum list asarray all concatenate cumsum sort slice insert ndim roll ravel shape unique argmax array isnan sum logical_not nanpercentile nanpercentile reshape einsum ppf sqrt tanh ppf concatenate size arctanh mean sqrt power sum len tanh ppf concatenate size arctanh sqrt pearsonr items list print transpose keys print ArrayStatistics tuple pad print set_device len get_test_loader NiftiPatchSaver split get_model makedirs show minorticks_on colorbar imshow numpy abs max subplots set_xticklabels set_yticklabels imshow savefig numpy range len rsample logsumexp sigmoid mean expand_target get_on_off_binary_target sum log rsample set_yticklabels add_subplot abs max round append_axes make_axes_locatable colorbar imshow savefig add_gridspec range update set_xticklabels set_ticks_position float enumerate print minorticks_on sigmoid figure numpy array len sigmoid zeros int64 type zeros eye list int64 zeros type range enumerate rsample zero_grad log get_dist Adam logsumexp LowRankModel sum range DiagonalModel get_target_fn mean expand_target get_prob deepcopy backward print isnan step train_model join get_train_loader get_loss ModelTrainer ones print set_device get_valid_loader get_training_hooks copyfile set_random_seed get_test_loader get_model get_optimizer makedirs seed str manual_seed print device join str exists print run_job tuple tuple tuple seed randint append zip enumerate slice CopyInformation GetImageFromArray get_num_maps tuple transpose ndim get_patch_and_padding shape zip zeros range len join setFormatter replace getLogger addHandler StreamHandler Formatter dirname DEBUG setLevel FileHandler makedirs max softmax sigmoid round items list requires_grad is_tensor detach | ## Stochastic Segmentation Networks [](https://arxiv.org/abs/2006.06015) [](https://colab.research.google.com/github/MiguelMonteiro/stochastic_segmentation_networks_demo/blob/master/ssn_demo.ipynb)  [\[Paper\]](https://arxiv.org/abs/2006.06015) [\[Interactive Demo\]](https://colab.research.google.com/github/MiguelMonteiro/stochastic_segmentation_networks_demo/blob/master/ssn_demo.ipynb) This repository contains the code for the paper: > Monteiro, M., Folgoc, L., Castro, D.C., Pawlowski, N., Marques, B., Kamnitsas, K., van der Wilk, M. 
and Glocker, B., _Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty_, 2020 [[arXiv]](https://arxiv.org/abs/2006.06015) If you use our code in your publications, please consider citing our paper: ``` | 1,559 |
bionlproc/BERT-CRel-Embeddings | ['word embeddings'] | ['Improved Biomedical Word Embeddings in the Transformer Era'] | train.py mesh_tree.py BMET/eval.py gen_corpus_static.py gen_finetune_exs.py BMET/config.py BMET/data.py BMET/vocab.py tools/uts_api_client.py BMET/utils.py BMET/model.py eval_intrinsic_ft.py affix_meshes proc_mesh_desc add_extra_documents mp_proc_docs parse_mesh_code proc_pubtator read_eval_terms gen_mt_examples gen_pm_examples mp_read_pm_file set_defaults train validate batchify MeshPairDataset EmbEvaluator MeshRelClassifier sequence_mask MeSHTree Node TrainingLogs generate_examples MeSH WvModel UtsClient append int parse_mesh_code split join affix_meshes group match append compile join partial print cpu_count close tqdm Pool children format print inorder_traversal len print set list parse punctuation text words set lower append iterfind open join sorted print glob D_PM cpu_count apply_async close Counter Pool children list punctuation items print words inorder_traversal ui set tqdm lower sa append find_mesh_family pa enumerate seed RSEED manual_seed validate save_model clip_grad_norm_ zero_grad is_emb_frozen DataLoader unfreeze_embs report_trn mdl F_OUT update eval next_annealing_stage item stage backward parameters LR_FT step MIN_LR_FT DataLoader partial cuda tensor numel isinstance load update combinations format print len shuffle Counter tqdm any choices append trange open parent | **BERT-CRel** is a transformer model for fine-tuning biomedical word embeddings that are jointly learned along with concept embeddings using a pre-training phase with fastText and a fine-tuning phase with a transformer setup. The goal is to provide high quality pre-trained biomedical embeddings that can be used in any downstream task by the research community. This repository contains the code used to implement the BERT-CRel methods and generate the embeddings. The corpus used for BERT-CRel contains biomedical citations from PubMed and the concepts are from the Medical Subject Headings (MeSH codes) terminology used to index citations. All our fine-tuned embeddings can be obtained from Zenodo: [](https://doi.org/10.5281/zenodo.4383195) **BERT-CRel-all** This contains word embeddings and all the MeSH descriptors and a subset of supplementary concepts (each of which meet a frequency threshold). Vocabulary is divided into three sections: (1) BERT special tokens (2) MeSH codes (3) English words in descending frequency order. (vocabulary size is 333,301) **BERT-CRel-MeSH** These files contain only MeSH code embeddings. (vocabulary size is 45,015) **BERT-CRel-words** These files contain only English word embeddings. (vocabulary size is 288,281) If you use our embeddings or find this code relevant or useful in any manner, please consider citing this paper: ``` | 1,560 |
birdortyedi/image-retrieval-with-capsules | ['image retrieval'] | ['Fashion Image Retrieval with Capsule Networks'] | models.py layers.py config.py blocks.py annotation_parser.py utils.py main.py TripletDirectoryIterator.py extract_neg_hard_pairs eval_partioner capsule_model transpose_conv_bn_block inception_block primary_capsule decoder_model residual_block conv_bn_block get_arguments Mask FashionCaps Length extract_embeddings train eval_results test FashionTripletCapsNet MultiGPUNet TripletDirectoryIterator decay_lr get_iterator triplet_cosine_loss squash triplet_eucliden_loss custom_generator margin_loss kl_divergence join mkdir move join list DirectoryIterator print len tqdm ImageDataGenerator next range split add concatenate residual_block residual_block conv_bn_block primary_capsule transpose_conv_bn_block print parse_args add_argument ArgumentParser on_epoch_end set_model batch_size Exception get_value input_size rotation_range save_weights whitening custom_generator save save_dir filepath brightness_range shift_fraction on_train_end str list train_on_batch TensorBoard append next shear_range range format initial_epoch get_iterator test lr mkdir hor_flip LearningRateScheduler compile join time print min tqdm zoom_range epochs on_epoch_begin len list extract_embeddings time sorted format print tqdm eval_results append range len format print tqdm vstack flow_from_directory ImageDataGenerator next array range predict len append list capsule_model Lambda l2_norm Model summary Input caps_model int constant euclidean_dist cosine_similarity int square relu int sqrt epsilon sum square next ImageDataGenerator TripletDirectoryIterator | # image-retrieval-with-capsules Fashion Image Retrieval with Capsule Networks *Accepted to the International Conference on Computer Vision, ICCV 2019, Workshop on Computer Vision for Fashion, Art and Design* ![architecture][arch] [arch]: ./assets/model_arc.png ## TODO LIST - [x] Literature search for clothing retrieval tasks - [x] Getting permission for data set - [x] Triplet directory iterator implementation - [x] Triplet-based capsule network implementation | 1,561 |
birdtianyu/hello-world | ['causal inference'] | ['Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution'] | Tools/BetaForOne.py Tools/Gauss.py images/BetaForOne.py Tools/L2_Regularizer.py Tools/Beta.py images/Beta.py Tools/Cubic_Spline_Interpolation.py | calculate Draw solutionOfEquation calculateEquationParameters init Distribution L2 f L1 append init len append print calculateEquationParameters init array len append show plot title scatter legend | # hello-world
This is my first project for learning CS. 
# Javascript Tutorial
https://wangdoc.com/javascript
# JavaScript in Depth: From Prototype to Prototype Chain
https://github.com/mqyqingfeng/Blog/issues/2

# AI Algorithm Engineer Handbook
http://www.huaxiaozhuan.com/
# Xiu-Shen WEI profile
 | 1,562 |
bit-twidd1er/adversarial-yolo-snapshot | ['human detection'] | ['Fooling automated surveillance cameras: adversarial patches to attack person detection'] | models/caffe_net.py debug.py tools/lmdb/create_dataset.py oude versies/craft_adv_draaien_scalen_plaatsen_werkt.py train_patch.py tools/lmdb/train_lmdb.py valid.py darknet.py detect.py oude versies/rotate2.py test_patch.py region_loss.py train.py batch_rotate.py FocalLoss.py models/tiny_yolo.py dataset.py recall.py partial.py utils.py oude versies/rotate.py layers/batchnorm/bn.py oude versies/craft_adv_patch_plaatsen_werkt.py cfg.py patch_config.py tools/lmdb/lmdb_utils.py scripts/voc_label.py image.py batch_detect.py load_data.py tools/lmdb/plot_lmdb.py demo.py layers/batchnorm/bn_lib/__init__.py craft_adv.py eval.py median_pool.py scripts/voc_eval.py scripts/eval_widerface.py layers/batchnorm/build.py models/resnet.py NPSCalculator PatchTransformer TotalVariation MaxProbExtractor InriaDataset PatchApplier PatchApplier PatchTransformer InriaDataset load_fc save_fc save_conv_bn load_conv parse_cfg save_conv print_cfg load_conv_bn get_max_probability read_and_size_image calc_nps transform_patch total_variation apply_patch get_printability_array EmptyModule MaxPoolStride1 Darknet Reorg GlobalAvgPool2d listDataset extract demo detect_cv2 detect detect_skimage test FocalLoss distort_image random_distort_image load_data_detection scale_image_channel data_augmentation fill_truth_detection rand_scale MedianPool2d partial Experiment3LowRes ReproducePaperObj Experiment4ClassOnly BaseConfig Experiment1 Experiment2HighRes Experiment1Desktop eval_list RegionLoss build_targets train adjust_learning_rate test main PatchTrainer scale_bboxes logging do_detect read_truths get_image_size read_truths_args plot_boxes nms convert2cpu bbox_ious load_class_names file_lines convert2cpu_long softmax image2torch plot_boxes_cv2 read_data_cfg sigmoid bbox_iou get_region_boxes valid BN2dFunc BN2d_slow BN2d _import_symbols Scale Concat CaffeNet Eltwise parse_prototxt ResNet Bottleneck conv3x3 Resnet101 BasicBlock TinyYoloNet get_max_probability read_and_size_image calc_nps transform_patch total_variation apply_patch get_printability_array get_max_probability read_and_size_image calc_nps total_variation apply_patch get_printability_array transform_patch th_affine2d th_iterproduct th_bilinear_interp2d getRotationMatrix eval_widerface save_boxes parse_rec voc_eval _do_python_eval voc_ap convert_annotation convert createDataset writeCache checkImageIsValid lmdbDataset lmdb_nsamples train adjust_learning_rate test readline rstrip strip close dict open append split print int append split copy_ from_numpy reshape numel tofile is_cuda copy_ from_numpy reshape numel tofile is_cuda copy_ from_numpy numel tofile view print size contiguous sigmoid shape unsqueeze max list asarray float32 append full sqrt sum prod abs sum numel view size convert from_buffer contiguous ByteTensor div resize tobytes grid_sample FloatTensor ones abs size rand ConstantPad2d mypad pi affine_grid sqrt uniform sin ceil zeros round max int size ConstantPad2d mypad sqrt transform_patch data convert2cpu load_class_names VideoCapture read print exit plot_boxes_cv2 do_detect waitKey Darknet imshow load_weights resize print_network cuda load_class_names time print convert do_detect Darknet load_weights resize print_network plot_boxes cuda load_class_names time COLOR_BGR2RGB print plot_boxes_cv2 do_detect Darknet load_weights resize print_network imread cuda range cvtColor load_class_names time print 
plot_boxes_cv2 do_detect Darknet load_weights resize print_network imread cuda range data nms num_classes view logging Variable get_region_boxes num_anchors anchors size eval bbox_iou truths_length cuda range enumerate len list point tuple merge split mode list point tuple convert mode split merge randint uniform uniform distort_image rand_scale int FLIP_LEFT_RIGHT height random_distort_image resize transpose width randint float crop getsize loadtxt reshape min zeros max range height replace convert width data_augmentation fill_truth_detection print Darknet save_weights load_weights print_network max height rstrip replace resize print do_detect Darknet read_truths_args eval load_weights bbox_iou width plot_boxes cuda range len int bbox_ious fill_ ones size log t pow repeat zero_ bbox_iou zeros max range len param_groups range len logging model zero_grad listDataset DataLoader adjust_learning_rate save_weights dataset cuda region_loss seen module size enumerate time backward print zeros step len module print train PatchTrainer patch_configs sum exp max min max min max sort append zeros range len data max time exp view LongTensor print convert2cpu size contiguous convert2cpu_long sigmoid index_select unsqueeze append cuda range len int imwrite FloatTensor print putText FONT_HERSHEY_SIMPLEX rectangle get_color round range len height Draw get_color FloatTensor print text rectangle save width range len getsize size loadtxt reshape append range read_truths append rstrip height view contiguous from_buffer ByteTensor div width tobytes Image div unsqueeze forward cuda nms view exit width height ByteTensor eval tobytes time isinstance print Variable contiguous from_buffer dict strip split deepcopy range len read close open print data get_image_size listDataset Darknet DataLoader cuda open nms num_classes range load_class_names num_anchors anchors size close eval load_weights mkdir enumerate Variable print write read_data_cfg print_network get_region_boxes len append dir getattr _wrap_function readline line_type strip dict split parse_layer_block append open data print shape print item print shape show squeeze shape interpolate to_pil_image mul view print clamp size stride gather add floor tensor long print pi FloatTensor bmm th_bilinear_interp2d print th_nearest_interp2d size transpose contiguous unsqueeze repeat expand_as float th_iterproduct basename height write close width open round len load_class_names join int basename height save_boxes resize print convert do_detect Darknet load_weights mkdir width plot_boxes round cuda walk int parse findall text append find arange concatenate size maximum sum max range parse_rec cumsum argmax max sum range eps format astype mkdir float enumerate minimum join print sort maximum voc_ap argsort zeros bool array len join format print mean mkdir voc_eval enumerate int str parse join text convert write index getroot iter open find imdecode fromstring IMREAD_COLOR join str rstrip replace print len writeCache range open open logging write lmdbDataset max | # Adversarial YOLO This repository is based on the marvis YOLOv2 inplementation: https://github.com/marvis/pytorch-yolo2 This work corresponds to the following paper: https://arxiv.org/abs/1904.08653: ``` @inproceedings{thysvanranst2019, title={Fooling automated surveillance cameras: adversarial patches to attack person detection}, author={Thys, Simen and Van Ranst, Wiebe and Goedem\'e, Toon}, booktitle={CVPRW: Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and 
Security}, year={2019} } | 1,563 |
biu-nlp/sprp-acl2018 | ['machine translation'] | ['Split and Rephrase: Better Evaluation and a Stronger Baseline', 'Split and Rephrase: Better Evaluation and Stronger Baselines'] | src/training_scripts/sprp_onmt_copy_512/test.py src/training_scripts/sprp_onmt_copy_256/validate_overtime.py src/training_scripts/sprp_onmt_copy_128/validate_overtime.py src/data/prepare-baseline-data-RDFs-relations.py src/training_scripts/sprp_onmt_copy_256_relations_split/test.py src/training_scripts/sprp_onmt_baseline_512_relations_split/validate.py src/training_scripts/sprp_onmt_baseline_128/validate_overtime.py src/training_scripts/sprp_onmt_baseline_256/validate_overtime.py src/training_scripts/sprp_onmt_copy_256_relations_split/validate_overtime.py src/training_scripts/sprp_onmt_copy_128_relations_split/validate_overtime.py src/training_scripts/sprp_onmt_copy_128_relations_split/test.py src/training_scripts/sprp_onmt_copy_256_relations_split/validate.py src/training_scripts/sprp_onmt_baseline_256_relations_split/test.py src/training_scripts/sprp_onmt_baseline_512_relations_split/test.py src/data/create_new_split.py src/training_scripts/sprp_onmt_copy_128/test.py src/training_scripts/sprp_onmt_copy_256/test.py src/training_scripts/sprp_onmt_copy_512_relations_split/validate_overtime.py src/training_scripts/sprp_onmt_baseline_256_relations_split/validate_overtime.py src/training_scripts/sprp_onmt_copy_128_relations_split/validate.py src/training_scripts/sprp_onmt_baseline_256/validate.py src/training_scripts/sprp_onmt_baseline_128_relations_split/test.py src/training_scripts/sprp_onmt_copy_256/validate.py src/training_scripts/sprp_onmt_baseline_128_relations_split/validate.py src/training_scripts/sprp_onmt_copy_512_relations_split/validate.py src/training_scripts/sprp_onmt_copy_128/validate.py src/training_scripts/sprp_onmt_baseline_512_relations_split/validate_overtime.py src/training_scripts/sprp_onmt_baseline_128/validate.py src/training_scripts/sprp_onmt_baseline_256_relations_split/validate.py src/training_scripts/sprp_onmt_copy_512_relations_split/test.py src/training_scripts/sprp_onmt_baseline_128/test.py src/training_scripts/sprp_onmt_baseline_256/test.py src/training_scripts/sprp_onmt_copy_512/validate.py src/training_scripts/sprp_onmt_baseline_512/test.py src/training_scripts/sprp_onmt_baseline_128_relations_split/validate_overtime.py src/training_scripts/sprp_onmt_baseline_512/validate.py src/training_scripts/sprp_onmt_baseline_512/validate_overtime.py src/data/prepare-evaluation-directories-RDFs-relations.py src/training_scripts/sprp_onmt_copy_512/validate_overtime.py src/evaluate.py evaluate_avg_concat_bleu multi_bleu relations_split_stats parse_RDF_xmls entities_split_stats split_data main process_sentdata_baseline process_sentdata main main main main main main main main main main main main main main main main main main main main main main main main main main main main main main main main main main main main load float sum len uuid4 str read format system split_data format system seed list defaultdict format remove relations_split_stats parse_RDF_xmls print set add choice entities_split_stats writelines sum len print float format print float format str defaultdict format readlines strip append listdir range split int str join format replace print strip write len match split int str join print strip len system write close match open split print evaluate_avg_concat_bleu mkdir list glob sort write filter isfile open eval_bleu_min_rouge | **Data and source code accompanying the paper "Split and 
Rephrase: Better Evaluation and a Stronger Baseline".** Roee Aharoni and Yoav Goldberg, ACL 2018 The data and some of the scripts are based on the repository by Narayan et al.: https://github.com/shashiongithub/Split-and-Rephrase This repository includes: - The proposed data split, under `data/baseline-seq2seq-split-RDFs-relations.zip`. - Scripts for: - Training our proposed models using openNMT-py (under `src/training_scripts`) - Evaluating the models as proposed by Narayan et al., 2017 (under `src/evaluate.py`) - Creating the RDF-based data split to reduce overlap between the development and test set found in the original split (under `src/data/create_new_split.py`) Feel free to reach out in `[email protected]` if you have any further questions! | 1,564 |
bj80heyue/Learning-to-Group | ['imitation learning'] | ['Merge or Not? Learning to Group Faces via Imitation Learning'] | code/reward_value_main.py code/frame.py code/Evaluator.py code/Trainer_reward.py code/Trainer.py code/Dataset.py code/reward_value_test.py code/Dicision.py code/load_test_data.py code/Evaluate.py Dataset identity_Dataset Dicision Precision_edge Recall misedge Recall_edge Precision evaluate load_test_data_set load_LFW_dataset load_lfw_dataset load_cpf_dataset load_nongt_nonquality load_MV_dataset load_HP_dataset trainNewModel trainXGBmodel trainSVMModel print range len list print dict append max range values len range len list dict append sum range len list print dict append sum max range len str list print write close keys dict open iter append max range len int list map computeQuality computeAffinity open splitlines split append Dataset range len pop int list map computeAffinity open splitlines split append Dataset range len list map dict splitlines split open Dataset range append len int list map computeAffinity open splitlines split append Dataset range len list map computeQuality dict computeAffinity splitlines split open Dataset range append len list map computeQuality computeAffinity open splitlines split append Dataset range len list map computeAffinity open splitlines split append Dataset range len svm_train print svm_parameter svm_problem svm_read_problem svm_save_model svm_train print svm_parameter svm_problem svm_read_problem svm_save_model print train DMatrix save_model | # Merge or Not? Learning to Group Faces via Imitation Learning ### Yue He, Kaidi Cao, Cheng Li, Chen Change Loy Code for the AAAI 2018 publication "Merge or Not? Learning to Group Faces via Imitation Learning".You can read a preprint on [Arxiv](https://arxiv.org/abs/1707.03986) ### Abstract Given a large number of unlabeled face images, face grouping aims at clustering the images into individual identities present in the data. This task remains a challenging problem despite the remarkable capability of deep learning approaches in learning face representation. In particular, grouping results can still be egregious given profile faces and a large number of uninteresting faces and noisy detections. Often, a user needs to correct the erroneous grouping manually. In this study, we formulate a novel face grouping framework that learns clustering strategy from ground-truth simulated behavior. This is achieved through imitation learning (a.k.a apprenticeship learning or learning by watching) via inverse reinforcement learning (IRL). In contrast to existing clustering approaches that group instances by similarity, our framework makes sequential decision to dynamically decide when to merge two face instances/groups driven by short- and long-term rewards. Extensive experiments on three benchmark datasets show that our framework outperforms unsupervised and supervised baselines. ### Dataset - GFW(Group Face in the Wild) - You can download the dataset from [here](https://www.dropbox.com/s/aktxy4phqaevmr7/GFW_RELEASE.tar?dl=0) - In the main folder,each subfolder represents an album. - An album contains number of identities person(**id from 3 to N**). - Specially,"**id = 1**" means passerby(apperaed only once in the album), "**id = 2**" means low-quality faces which cannot be recognize as a normal human face. | 1,565 |
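The Learning-to-Group row above describes the GFW dataset layout: album subfolders in the main folder, identities numbered 3..N, with id 1 reserved for passerby faces and id 2 for unrecognizable low-quality faces. The sketch below walks such a layout and keeps only the real identities; the assumption that each identity is a numbered subfolder of its album (and that the leaves are image files) goes beyond what the README states.

```python
# Illustrative GFW-style traversal; folder nesting beyond album/identity is assumed.
import os
from collections import defaultdict

def load_gfw_clusters(root):
    clusters = {}                                    # album -> {identity_id: [image paths]}
    for album in sorted(os.listdir(root)):
        album_dir = os.path.join(root, album)
        if not os.path.isdir(album_dir):
            continue
        groups = defaultdict(list)
        for ident in os.listdir(album_dir):
            ident_dir = os.path.join(album_dir, ident)
            if not (os.path.isdir(ident_dir) and ident.isdigit()) or int(ident) < 3:
                continue                             # id 1 = passerby, id 2 = low-quality faces
            for name in os.listdir(ident_dir):
                groups[int(ident)].append(os.path.join(ident_dir, name))
        clusters[album] = dict(groups)
    return clusters
```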
bjerva/cassiodigitalis | ['word embeddings'] | ['Word Embeddings Pointing the Way for Late Antiquity'] | plot_heatmap.py person_to_concept.py read_w2vec read_concepts read_poi swap swap2 load append defaultdict | # cassiodigitalis This archive contains the code used in 'Word Embeddings Pointing the Way for Late Antiquity' within the Cassiodigitalis project (http://aclweb.org/anthology/W/W15/W15-3708.pdf). If you intend to use this code, please let me know and be aware that some clean-up would be in order. TODO: * Clean up code * Document code * Upload latin vector archive Copyright 2015 Johannes Bjerva Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. | 1,566 |
bkj/simple_las | ['information retrieval'] | ['Scaling Active Search using Linear Similarity Functions'] | test/regression-test.py test/experiments.py test/activeSearchInterface.py setup.py simple_las/simple_las.py simple_las/__init__.py SimpleLAS matrix_squeeze Parameters genericAS linearizedAS change_prev load_mnist load_a9a run_activesearch load_mnist change_prev load open load squeeze where open int permutation hstack choice SimpleLAS setLabel next_message choice append range | ### simple_las Simplified reimplementation of linearized active search from https://arxiv.org/pdf/1705.00334.pdf as implemented in https://github.com/AutonlabCMU/ActiveSearch/ See `simple_las.py` and `experiments.py` for details. __MIT LICENSE__ | 1,567 |
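The simple_las row above points to `experiments.py` for usage, and its function listing shows a loop built from `SimpleLAS`, `next_message`, and `setLabel`. The sketch below reconstructs that loop shape on synthetic data; only those three names come from the listing, while the constructor arguments, the feature-matrix layout, and `next_message` being an attribute are assumptions.

```python
# Hedged active-search loop on synthetic data (constructor arguments are assumptions).
import numpy as np
from simple_las import SimpleLAS  # import path assumed

X = np.random.rand(64, 1000)                       # features, assumed (dims x items) layout
y = (np.random.rand(1000) > 0.95).astype(int)      # hidden relevance we want to discover
seed = int(np.where(y == 1)[0][0])                 # one known positive to start from

las = SimpleLAS(X, init_labels={seed: 1}, pi=y.mean(), eta=0.5)
for _ in range(100):
    idx = las.next_message                         # item the model most wants labeled next
    las.setLabel(idx, int(y[idx]))
```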
bkmi/equivariant-benchmark | ['molecular property prediction'] | ['Relevance of Rotationally Equivariant Convolutions for Predicting Molecular Properties'] | qm9_train.py schnet_tetris.py qm9_random_hp_search.py evaluation.py networks.py qm9_eval.py visualize_learning.py tabulate_evaluation.py arguments.py qm9_property_selector fix_old_args_with_defaults train_parser main evaluate_dataset evaluate record_versions create_kernel_conv ResNetwork PermutedBatchNorm1d OutputMLPNetwork NormVarianceLinear OutputScalarNetwork gate_error Network constants main load_directory_and_args main dict_to_statement randomized_hps create_model NoValidationDerivativeHook get_statistics configuration get_data build_standardized_mse_loss create_or_load_directory main train WallHook MemoryProfileHook AvgSpacial plot_training SE3Net get_dataset main train extract_evaluations main extract_evaluation u0 several_axes_semilog_df_dict read_log_files plot_all_targets all_targets mu_u0_compare_order pair_targets_directories mu_bs16_r50_compare_basis logs_from_folder_of_model_dirs mu semilogy_df_dict random_hp mu_compare_model_size collect_columns table_from_folder_of_model_dirs main random_hp_table gleichzeitig schnet_alpha_small_batches partition_polar_molecules l0net_l1net_mu_mae_correlation populate_page fig_axis_all_targets test_set_stds mu_r25_compare_bs fix_eval_csv_strings add_argument_group add_argument ArgumentParser add_argument ArgumentParser outnet_l0 outnet_layers mlp_layers outnet_l2 outnet_l1 radial_model checkpoint_interval mlp_neurons res keep_n_checkpoints outnet_neurons outnet_l3 mlp_out outnet_L optimizer items list join extend evaluate_dataset save append cat model requires_grad_ reset append to add_batch exists record_versions partial unsqueeze detach load join results add_argument load_recent file model_dir ArgumentParser cpu parse_args load join str create_model get_statistics configuration evaluate load_recent get_data model_dir load_state_dict load_directory_and_args info max randint rand contain_dir randomized_hps rmtree mkdir randint exists run get basicConfig print float32 device set_default_dtype join save model_dir makedirs QM9 db train_test_split AtomsLoader from_numpy get_atomref info create_kernel_conv ResNetwork AtomisticModel ShiftedSoftplus OutputMLPNetwork Softplus res Identity OutputScalarNetwork mlp_out Network Adam SGD Trainer parameters model_dir build_standardized_mse_loss info build_mse_loss mlp_out len DataParallel fix_old_args_with_defaults create_or_load_directory ArgumentParser parse_args train wall tensor stack arange len format StepLR backward print size f zero_grad get_dataset new_ones numpy item append to step range cross_entropy subplots set_yticklabels max show set_title set_xlabel pcolormesh savefig LineCollection concatenate tight_layout add_collection stack Normalize get_cmap set_array enumerate reshape min set_ylabel array set_ylim len SE3Net get_dataset append enumerate len join columns read_csv sort_index print to_csv zip extract_evaluations sorted pair_targets_directories list set_title semilogy collect_columns flatten zip append keys show subplots replace tight_layout populate_page savefig legend append items list semilogy semilogy_df_dict set_title set_xlabel set_ylabel zip show subplots several_axes_semilog_df_dict read_log_files tight_layout flatten savefig legend show subplots tight_layout read_log_files savefig legend compare_two_mu_mae_semilogs_by_order show subplots tight_layout read_log_files savefig legend compare_two_mu_mae_semilogs_by_order show 
subplots tight_layout read_log_files savefig legend compare_two_mu_mae_semilogs_by_order update items list show subplots replace semilogy tight_layout savefig show items list subplots replace semilogy tight_layout savefig legend QM9 train_test_split any print show items list subplots set_title semilogy set_xlabel tight_layout flatten savefig legend zip show items list subplots set_title semilogy set_xlabel tight_layout flatten savefig zip enumerate glob join glob join reduce glob join zip glob join append set_title zip semilogy show subplots set_xlabel set_xlim tight_layout plot_all_targets flatten savefig legend enumerate show subplots semilogy set_xlabel tight_layout set_ylabel savefig legend read_csv set_ylim load show subplots scatter numpy cpu abs QM9 train_test_split AtomsLoader value_counts apply mean test_set_stds fix_eval_csv_strings random_hp_table | # equivariant-benchmark Benchmarking equivariant neural networks. This was the package used for https://arxiv.org/abs/2008.08461 You have to install exactly the following commits if you want to make this work. ``` SchNetPack commit information: 3c58fd1a0b9fa2b046a88e89eb0d0c9051973046 * master 3c58fd1 [origin/master] removed the none agg mode origin https://github.com/bkmi/schnetpack.git ... which can be found at exactly this commit https://github.com/bkmi/schnetpack/commit/3c58fd1a0b9fa2b046a88e89eb0d0c9051973046 ``` | 1,568 |
blackfeather-wang/ISDA-for-Deep-Networks | ['data augmentation', 'image augmentation'] | ['Regularizing Deep Networks with Semantic Data Augmentation', 'Implicit Semantic Data Augmentation for Deep Networks'] | Image classification on CIFAR/networks/resnet.py Semantic segmentation on Cityscapes/libs/bn.py Semantic segmentation on Cityscapes/networks/__init__.py Semantic segmentation on Cityscapes/engine.py Semantic segmentation on Cityscapes/loss/loss.py Semantic segmentation on Cityscapes/utils/logger.py Semantic segmentation on Cityscapes/networks/deeplabv3.py Image classification on CIFAR/ISDA.py Image classification on CIFAR/networks/se_wideresnet.py Semantic segmentation on Cityscapes/evaluate.py Semantic segmentation on Cityscapes/dataset/datasets.py Image classification on CIFAR/networks/resnext.py Image classification on ImageNet/networks/densenet.py Semantic segmentation on Cityscapes/loss/criterion.py Image classification on CIFAR/networks/shake_shake.py Semantic segmentation on Cityscapes/utils/pyt_utils.py Semantic segmentation on Cityscapes/libs/functions.py Semantic segmentation on Cityscapes/libs/dense.py Visualizing deep features/aug_biggan512_imagenet.py Image classification on ImageNet/imagenet_DDP.py Semantic segmentation on Cityscapes/train_isda.py Semantic segmentation on Cityscapes/libs/build.py Image classification on CIFAR/cutout.py Image classification on ImageNet/ISDA_imagenet.py Image classification on CIFAR/networks/densenet_bc.py Semantic segmentation on Cityscapes/libs/__init__.py Image classification on ImageNet/networks/resnet.py Semantic segmentation on Cityscapes/loss/lovasz_losses.py Semantic segmentation on Cityscapes/networks/pspnet_isda.py Semantic segmentation on Cityscapes/utils/encoding.py Image classification on CIFAR/networks/se_resnet.py Image classification on CIFAR/networks/wideresnet.py Image classification on CIFAR/networks/se_module.py Semantic segmentation on Cityscapes/networks/deeplabv3_isda.py Visualizing deep features/cov_estimate.py Visualizing deep features/resnet.py Image classification on CIFAR/train.py Image classification on CIFAR/networks/shakedrop.py Semantic segmentation on Cityscapes/libs/residual.py Semantic segmentation on Cityscapes/networks/pspnet.py Semantic segmentation on Cityscapes/libs/misc.py Semantic segmentation on Cityscapes/train.py Image classification on CIFAR/networks/shake_pyramidnet.py Visualizing deep features/ISDA_imagenet.py Semantic segmentation on Cityscapes/libs/_ext/__init__.py Image classification on CIFAR/transforms.py Image classification on CIFAR/autoaugment.py ImageNetPolicy SVHNPolicy SubPolicy CIFAR10Policy Cutout EstimatorCV ISDALoss validate Full_layer AverageMeter accuracy save_checkpoint adjust_learning_rate mkdir_p main train RandomErasing _bn_function_factory DenseNet _DenseLayer _DenseBlock _Transition resnet110_cifar Bottleneck BasicBlock resnet1202_cifar ResNet_Cifar resnet164_cifar resnet20_cifar resnet32_cifar preact_resnet1001_cifar resnet1001_cifar PreActBasicBlock ResNet_MNIST preact_resnet110_cifar PreActBottleneck preact_resnet164_cifar resnet20_mnist resnet44_cifar conv3x3 resnet56_cifar PreAct_ResNet_Cifar ResNeXtBottleneck resnext29_16_64 resnext29_8_64 CifarResNeXt SELayer resnet44_cifar resnet110_cifar resnet1202_cifar resnet1001_cifar PreActBasicBlock ResNet_Cifar Bottleneck resnet164_cifar preact_resnet110_cifar resnet20_cifar conv3x3 preact_resnet1001_cifar resnet32_cifar PreActBottleneck BasicBlock resnet56_cifar PreAct_ResNet_Cifar preact_resnet164_cifar BasicBlock 
NetworkBlock WideResNet ShakeDropFunction ShakeDrop conv3x3 BasicBlock PyramidNet Bottleneck shake_resnet26_2x64d ShakeResNet ShakeShake ShakeBlock shake_resnext29_2x4x64d shake_resnet26_2x112d ShakeBottleNeck Shortcut shake_resnet26_2x32d ShakeResNeXt BasicBlock NetworkBlock WideResNet validate AverageMeter accuracy save_checkpoint ProgressMeter adjust_learning_rate main_worker main train EstimatorCV ISDALoss _bn_function_factory densenet265 DenseNet densenet169 densenet201 _DenseLayer _DenseBlock _Transition densenet121 conv1x1 resnext50_32x4d ResNet resnet50 resnext101_32x8d Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 Engine predict_multiscale predict_sliding get_palette get_confusion_matrix predict_whole main get_parser pad_image set_bn_momentum lr_poly adjust_learning_rate get_parser set_bn_eval main str2bool VOCDataTestSet CSDataSet CSDataTestSet VOCDataSet InPlaceABNSyncWrapper ABN InPlaceABNSync InPlaceABNWrapper InPlaceABN _pair DenseModule _act_forward _count_samples _broadcast_shape InPlaceABNSync _check_contiguous InPlaceABN _reduce _check _act_backward GlobalAvgPool2d IdentityResidualBlock _import_symbols CriterionOhemDSN2 CriterionDSN CriterionOhemDSN OhemCrossEntropy2d lovasz_grad flatten_binary_scores iou binary_xloss xloss lovasz_hinge_flat StableBCELoss lovasz_hinge lovasz_softmax_flat isnan mean flatten_probas lovasz_softmax iou_binary ASPPModule ResNet Bottleneck conv3x3 Seg_Model ASPPModule ResNet Bottleneck conv3x3 Seg_Model ResNet Bottleneck PSPModule conv3x3 Seg_Model ResNet Bottleneck PSPModule conv3x3 Seg_Model CallbackContext allreduce AllReduce DataParallelModel _criterion_parallel_apply execute_replication_callbacks Reduce patch_replication_callback DataParallelCriterion get_logger LogFormatter load_model _dbg_interactive decode_predictions inv_preprocess decode_labels parse_devices ensure_dir all_reduce_tensor extant_file link_file reduce_tensor AverageMeter accuracy adjust_learning_rate ProgressMeter main train EstimatorCV ISDALoss conv1x1 resnext50_32x4d ResNet resnet50 resnext101_32x8d Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 ImageNetPolicy SVHNPolicy SubPolicy CIFAR10Policy Cutout EstimatorCV ISDALoss validate Full_layer AverageMeter accuracy save_checkpoint adjust_learning_rate mkdir_p main train RandomErasing _bn_function_factory DenseNet _DenseLayer _DenseBlock _Transition resnet110_cifar Bottleneck BasicBlock resnet1202_cifar ResNet_Cifar resnet164_cifar resnet20_cifar resnet32_cifar preact_resnet1001_cifar resnet1001_cifar PreActBasicBlock ResNet_MNIST preact_resnet110_cifar PreActBottleneck preact_resnet164_cifar resnet20_mnist resnet44_cifar conv3x3 resnet56_cifar PreAct_ResNet_Cifar ResNeXtBottleneck resnext29_16_64 resnext29_8_64 CifarResNeXt SELayer resnet44_cifar resnet110_cifar resnet1202_cifar resnet1001_cifar PreActBasicBlock ResNet_Cifar Bottleneck resnet164_cifar preact_resnet110_cifar resnet20_cifar conv3x3 preact_resnet1001_cifar resnet32_cifar PreActBottleneck BasicBlock resnet56_cifar PreAct_ResNet_Cifar preact_resnet164_cifar BasicBlock NetworkBlock WideResNet ShakeDropFunction ShakeDrop conv3x3 PyramidNet Bottleneck shake_resnet26_2x64d ShakeResNet ShakeShake ShakeBlock shake_resnext29_2x4x64d shake_resnet26_2x112d ShakeBottleNeck Shortcut shake_resnet26_2x32d ShakeResNeXt NetworkBlock WideResNet validate AverageMeter accuracy save_checkpoint ProgressMeter adjust_learning_rate main_worker main train EstimatorCV ISDALoss _bn_function_factory densenet265 DenseNet densenet169 densenet201 
_DenseLayer _DenseBlock _Transition densenet121 conv1x1 resnext50_32x4d ResNet resnet50 resnext101_32x8d Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 Engine predict_multiscale predict_sliding get_palette get_confusion_matrix predict_whole main get_parser pad_image set_bn_momentum lr_poly adjust_learning_rate get_parser set_bn_eval main str2bool VOCDataTestSet CSDataSet CSDataTestSet VOCDataSet InPlaceABNSyncWrapper ABN InPlaceABNSync InPlaceABNWrapper InPlaceABN _pair DenseModule _act_forward _count_samples _broadcast_shape InPlaceABNSync _check_contiguous InPlaceABN _reduce _check _act_backward GlobalAvgPool2d IdentityResidualBlock _import_symbols CriterionOhemDSN2 CriterionDSN CriterionOhemDSN OhemCrossEntropy2d lovasz_grad flatten_binary_scores iou binary_xloss xloss lovasz_hinge_flat StableBCELoss lovasz_hinge lovasz_softmax_flat isnan mean flatten_probas lovasz_softmax iou_binary ASPPModule ResNet Bottleneck conv3x3 Seg_Model ASPPModule ResNet Bottleneck conv3x3 Seg_Model PSPModule PSPModule CallbackContext allreduce AllReduce DataParallelModel _criterion_parallel_apply execute_replication_callbacks Reduce patch_replication_callback DataParallelCriterion get_logger LogFormatter load_model _dbg_interactive decode_predictions inv_preprocess decode_labels parse_devices ensure_dir all_reduce_tensor extant_file link_file reduce_tensor AverageMeter accuracy adjust_learning_rate ProgressMeter main train EstimatorCV ISDALoss conv1x1 resnext50_32x4d ResNet resnet50 resnext101_32x8d Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 validate layers cutout WideResNet SGD DataLoader adjust_learning_rate save_checkpoint cuda widen_factor max savetxt array dirname load_state_dict sum range format DenseNet Compose resume Normalize mkdir_p augment load int autoaugment Full_layer print resnext29_8_64 feature_num shake_resnext29_2x4x64d PyramidNet shake_resnet26_2x112d train resnext29_16_64 shake_resnet26_2x32d zero_grad cuda open update format size close lambda_0 item enumerate time criterion backward Variable print AverageMeter write step len update time format append criterion ave Variable print size AverageMeter write close eval item open cuda enumerate len makedirs copyfile join save param_groups cos pi topk size t eq mul_ expand_as append sum max ResNet_Cifar ResNet_MNIST ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar ResNet_Cifar PreAct_ResNet_Cifar PreAct_ResNet_Cifar PreAct_ResNet_Cifar CifarResNeXt CifarResNeXt seed world_size spawn multiprocessing_distributed warn device_count manual_seed main_worker parse_args gpu workers data validate batch_size multiprocessing_distributed SGD DataParallel DistributedDataParallel ImageFolder DataLoader adjust_learning_rate save_checkpoint features cuda max set_device DistributedSampler rank savetxt load_state_dict append to sum range format init_process_group Compose start_epoch distributed lr resume Normalize isfile lambda_0 load int join evaluate print set_epoch parameters pre_train feature_num train epochs array gpu display epochs accuracy ProgressMeter gpu ProgressMeter print lr epochs DenseNet DenseNet DenseNet DenseNet load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict add_argument ArgumentParser range pad int isinstance transpose min shape Upsample cuda ceil zeros range max pad_image net isinstance transpose 
from_numpy shape Upsample cuda net data zoom predict_sliding copy shape zeros float bincount zeros astype range get_parser lr_poly eval __name__ __name__ isinstance fn append size enumerate size enumerate slope elu_cuda _check leaky_relu_cuda elu_inv_cuda leaky_relu_backward_cuda elu_backward_cuda slope _check leaky_relu_cuda dir _wrap_function getattr append callable cumsum sum len mean zip append float sum zip append float sum range mean lovasz_hinge_flat data lovasz_grad relu Variable sort dot float view Variable float flatten_binary_scores mean lovasz_softmax_flat data lovasz_grad Variable sort size dot append float abs size view filterfalse next iter enumerate ResNet load_model join isinstance _worker len start is_grad_enabled append range Lock list hasattr __data_parallel_replicate__ modules enumerate len replicate setFormatter getLogger addHandler formatter makedirs StreamHandler setLevel INFO FileHandler reduce clone div_ all_reduce clone div_ load items time list format join isinstance info set OrderedDict warning load_state_dict device keys int list format join endswith device_count info append range split remove format system makedirs embed load new shape zeros numpy array range enumerate load isinstance concatenate new shape numpy append zeros argmax array range enumerate uint8 astype shape zeros numpy range data_url ImageFolder open str resnet50 close lr lambda_0 join write train_url parameters cpu epochs model eval | # ISDA-Pytorch The Implicit Semantic Data Augmentation (ISDA) algorithm implemented in Pytorch. - (NeurIPS 2019) [Implicit Semantic Data Augmentation for Deep Networks](https://arxiv.org/abs/1909.12220) - (T-PAMI) [Regularizing Deep Networks with Semantic Data Augmentation](https://arxiv.org/abs/2007.10538) **Update on 2021/04/23: Release Code for Visualizing Deep Features on ImageNet!** **Update on 2021/01/17: Journal Version of ISDA is Accepted by T-PAMI!** **Update on 2020/04/25: Release Pre-trained Models on ImageNet.** **Update on 2020/04/24: Release Code for Image Classification on ImageNet and Semantic Segmentation on Cityscapes.** ## Introduction We propose a novel implicit semantic data augmentation (ISDA) approach to complement traditional augmentation techniques like flipping, translation or rotation. | 1,569 |
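The ISDA-Pytorch row above augments semantics in deep feature space rather than pixels (its listing includes `ISDALoss` and a covariance estimator, `EstimatorCV`). For intuition only, the self-contained sketch below does the explicit version of that idea — sampling feature perturbations from class-conditional covariances and averaging the loss; the actual ISDA method replaces this sampling with a closed-form upper bound of the expected loss, so none of the names or signatures here are the repo's.

```python
# Naive explicit semantic augmentation, for intuition only (ISDA itself is sampling-free).
import torch
import torch.nn.functional as F

def explicit_semantic_aug_loss(features, classifier, targets, class_cov, strength, n_samples=8):
    """features: (N, D); classifier: nn.Linear(D, C); targets: (N,) int64;
    class_cov: (C, D, D) positive-definite class-conditional feature covariances."""
    losses = []
    for _ in range(n_samples):
        noise = torch.stack([
            torch.distributions.MultivariateNormal(
                torch.zeros(features.size(1)), covariance_matrix=strength * class_cov[t]
            ).sample()
            for t in targets
        ])                                            # one class-dependent perturbation per sample
        losses.append(F.cross_entropy(classifier(features + noise), targets))
    return torch.stack(losses).mean()
```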
bloomberg/cnn-rnf | ['sentiment analysis'] | ['Convolutional Neural Networks with Recurrent Neural Filters'] | proc_data.py cnn_keras.py ConvInputLayer train_conv_net make_idx_data main parse_args get_idx_from_sent get_corpus build_data WordVecs main parse_args argmax print to_categorical fit shape Model info accuracy_score Input max range compile len append append array get_idx_from_sent add_argument ArgumentParser load train_conv_net make_idx_data word_idx_map info parse_args dataset W open get_corpus set add info append max range enumerate len dump build_data emb_path output WordVecs | # Convolutional Neural Networks with Recurrent Neural Filters Author: Yi Yang Contact: [email protected] ## Basic description This is the Python implementation of the recurrent neural filters for convolutional neural networks, described in Yi Yang "Convolutional Neural Networks with Recurrent Neural Filters" EMNLP 2018 [[pdf]](https://arxiv.org/abs/1808.09315) | 1,570 |
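The cnn-rnf row above implements recurrent neural filters, i.e., an RNN takes the place of the linear filter that a text CNN slides over n-gram windows. The sketch below illustrates that idea; it is written in PyTorch for self-containment even though the repo itself is Keras-based (`cnn_keras.py`), and the GRU, window width, and class name are illustrative choices rather than the repo's API.

```python
# Illustrative recurrent-neural-filter layer (PyTorch; the repo uses Keras).
import torch
import torch.nn as nn

class RecurrentFilter(nn.Module):
    def __init__(self, emb_dim, n_filters, width=5):
        super().__init__()
        self.width = width
        self.rnn = nn.GRU(emb_dim, n_filters, batch_first=True)

    def forward(self, x):                        # x: (batch, seq_len, emb_dim), seq_len >= width
        windows = x.unfold(1, self.width, 1)     # (batch, n_windows, emb_dim, width)
        windows = windows.permute(0, 1, 3, 2)    # (batch, n_windows, width, emb_dim)
        b, w, k, d = windows.shape
        _, h = self.rnn(windows.reshape(b * w, k, d))
        feats = h[-1].reshape(b, w, -1)          # final RNN state = one feature per window
        return feats.max(dim=1).values           # max-over-time pooling, as in text CNNs
```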
bloomberg/sgtb | ['entity disambiguation'] | ['Collective Entity Disambiguation with Structured Gradient Tree Boosting'] | structured_learner.py structured_gradient_boosting.py StructuredGradientBoosting train_model main make_idx_data time print get_acc make_idx_data StructuredGradientBoosting fit append array len train_model parse_args add_argument ArgumentParser | # Structured Gradient Tree Boosting Author: Yi Yang Contact: [email protected] ## Basic description This is the Python implementation of the structured gradient tree boosting model for collective named entity disambiguation, described in Yi Yang, Ozan Irsoy, and Kazi Shefaet Rahman "Collective Entity Disambiguation with Structured Gradient Tree Boosting" NAACL 2018 [[pdf]](https://arxiv.org/pdf/1802.10229.pdf) | 1,571 |
bluecoderbot/hed-master | ['boundary detection', 'edge detection'] | ['Holistically-Nested Edge Detection'] | python/caffe/io.py python/caffe/test/test_python_layer.py scripts/download_model_binary.py python/caffe/net_spec.py examples/hed/solve.py python/caffe/test/test_net.py tools/extra/resize_and_crop_images.py python/draw_net.py python/caffe/test/test_net_spec.py src/caffe/test/test_data/generate_sample_data.py python/caffe/draw.py python/caffe/pycaffe.py tools/extra/extract_seconds.py scripts/cpp_lint.py python/classify.py python/caffe/test/test_solver.py python/caffe/classifier.py python/caffe/test/test_python_layer_with_param_str.py tools/extra/parse_log.py python/caffe/__init__.py python/caffe/test/test_layer_type_list.py scripts/copy_notebook.py python/caffe/detector.py python/detect.py interp_surgery upsample_filt main main main parse_args Classifier Detector get_edge_label draw_net get_layer_label get_pydot_graph choose_color_by_layertype get_pooling_types_dict draw_net_to_file Transformer blobproto_to_array datum_to_array array_to_blobproto arraylist_to_blobprotovecor_str array_to_datum resize_image blobprotovector_str_to_arraylist load_image oversample Layers Function Parameters Top NetSpec assign_proto param_name_dict to_proto _Net_blobs _Net_forward_all _Net_set_input_arrays _Net_backward _Net_params _Net_forward _Net_outputs _Net_forward_backward_all _Net_blob_loss_weights _Net_batch _Net_inputs TestLayerTypeList simple_net_file TestNet lenet TestNetSpec silent_net anon_lenet exception_net_file parameter_net_file SimpleLayer TestPythonLayer ParameterLayer python_net_file ExceptionLayer SimpleParamLayer TestLayerWithParam python_param_net_file TestSolver ParseNolintSuppressions CheckVlogArguments CheckSectionSpacing FindNextMultiLineCommentEnd ReplaceAll CheckForFunctionLengths _SetOutputFormat _IsTestFilename _VerboseLevel CheckBraces RemoveMultiLineComments ResetNolintSuppressions CheckForNonStandardConstructs _SetVerboseLevel PrintUsage _NestingState CheckIncludeLine CheckAccess _CppLintState Search CheckInvalidIncrement RemoveMultiLineCommentsFromRange CleansedLines CheckForBadCharacters UpdateIncludeState FindPreviousMatchingAngleBracket CheckEmptyBlockBody FindNextMultiLineCommentStart Match _NamespaceInfo CheckMakePairUsesDeduction CheckCheck IsBlankLine _SetFilters ProcessLine _FunctionState CheckPosixThreading GetLineWidth GetHeaderGuardCPPVariable IsCppString _IncludeState CheckSpacing _ClassInfo CheckForCopyright IsErrorSuppressedByNolint ProcessFileData CheckForMultilineCommentsAndStrings CloseExpression _PreprocessorInfo _OutputFormat CheckForIncludeWhatYouUse CheckSpacingForFunctionCall FindEndOfExpressionInLine FindNextMatchingAngleBracket _SetCountingStyle ProcessFile _IncludeError CleanseRawStrings CheckAltTokens CheckForNewlineAtEOF ParseArguments CheckForNonConstReference PrintCategories _Filters main FilesBelongToSameModule CheckCStyleCast FileInfo _BlockInfo CheckForHeaderGuard CheckCaffeDataLayerSetUp ReverseCloseExpression CleanseComments _DropCommonSuffixes _ClassifyInclude CheckStyle CheckCaffeAlternatives FindStartOfExpressionInLine _ShouldPrintError CheckComment Error _GetTextInside CheckLanguage CheckCaffeRandom GetPreviousNonBlankLine reporthook parse_readme_frontmatter model_checks_out valid_dirname get_start_time extract_seconds extract_datetime_from_line get_log_created_year write_csv parse_log fix_initial_nan_learning_rate save_csv_files main parse_args parse_line_for_net_output ResizeCropImagesMapper PILResizeCrop OpenCVResizeCrop print 
shape upsample_filt model_def endswith ArgumentParser save mean_file channel_swap output_file dirname expanduser parse_args input_file predict Classifier set_mode_cpu load time isdir print add_argument set_mode_gpu pretrained_model gpu len DataFrame Detector format to_hdf detect_selective_search mean set_index to_csv detect_windows read_csv add_argument ArgumentParser read NetParameter output_image_file rankdir Merge draw_net_to_file items list DESCRIPTOR batch_size str num_output get_pooling_types_dict add_edge get_edge_label list Dot get_layer_label values name choose_color_by_layertype Edge Node bottom append type layer add_node top shape BlobProto extend flat extend BlobProtoVector ParseFromString BlobProtoVector extend tostring shape Datum flat data len astype float32 tile zoom tuple resize fill empty array concatenate shape tile empty array LayerParameter list NetParameter _to_proto extend Counter OrderedDict values iteritems isinstance extend add getattr setattr items list layers index set outputs _forward len items list _backward layers inputs index set len items list asarray extend copy next _batch iter forward values len items list asarray backward extend next _batch zip_longest zip iter forward values len ascontiguousarray list concatenate iter num zeros next range values len NamedTemporaryFile str close write data Pooling pool1 conv2 pool2 ip1 relu1 SoftmaxWithLoss Convolution NetSpec DummyData ip2 ReLU InnerProduct label conv1 Pooling SoftmaxWithLoss Convolution DummyData ReLU InnerProduct data NetSpec DummyData Silence data2 error search add group clear compile compile compile SetOutputFormat SetCountingStyle SetFilters _Filters startswith IsErrorSuppressedByNolint _ShouldPrintError write IncrementErrorCount replace append Match group find startswith endswith range error FindNextMultiLineCommentEnd RemoveMultiLineCommentsFromRange FindNextMultiLineCommentStart rstrip find range len FindEndOfExpressionInLine range len FindStartOfExpressionInLine error min search I range len FileInfo RepositoryName sep sub ParseNolintSuppressions error startswith split GetHeaderGuardCPPVariable enumerate error enumerate error len error replace count error find error find error find error find error Search error match InnermostClass replace error escape Match Search error group Search Check error lines Count End group Begin NumLines Match raw_lines range Search error match group error Match group pop group append Search pop group append Search elided replace CheckSpacingForFunctionCall rfind error len group min CloseExpression NumLines sub find CheckComment Match range Search lines_without_raw_strings error group starting_linenum Match range Search error rfind len group ReverseCloseExpression Search Match CloseExpression find error Match CloseExpression find elided error strip group FindEndOfExpressionInLine find Match range CloseExpression len error Match finditer normalize isinstance GetLineWidth int InnermostClass CheckCheck error CheckAltTokens CheckBraces CheckSpacing CheckSectionSpacing CheckEmptyBlockBody CheckAccess GetHeaderGuardCPPVariable lines_without_raw_strings _DropCommonSuffixes RepositoryName match split CheckNextIncludeOrder CanonicalizeAlphabeticalOrder FileInfo error search group SetLastHeader match _ClassifyInclude Match pop end search set append values M rstrip replace CheckCStyleCast error _GetTextInside CheckIncludeLine search group lstrip startswith Match ResetSection Search split rfind error group ReverseCloseExpression lstrip findall Match range Search ReplaceAll error 
Match Search endswith replace setdefault group search CleanseComments open list FilesBelongToSameModule error search copy sub NumLines FullName keys range error search CheckPosixThreading ParseNolintSuppressions CheckVlogArguments CheckMakePairUsesDeduction CheckCaffeDataLayerSetUp CheckLanguage CheckInvalidIncrement CheckCaffeRandom CheckForNonConstReference check_fn Update CheckForNonStandardConstructs CheckStyle raw_lines CheckForMultilineCommentsAndStrings CheckCaffeAlternatives CheckForFunctionLengths CleansedLines _NestingState CheckForBadCharacters CheckForNewlineAtEOF _IncludeState RemoveMultiLineComments CheckForCopyright ResetNolintSuppressions CheckForHeaderGuard NumLines CheckCompletedBlocks CheckForIncludeWhatYouUse range ProcessLine _FunctionState Error rstrip endswith len write ProcessFileData _SetVerboseLevel range split write exit join write exit _VerboseLevel int getopt _SetOutputFormat set _SetVerboseLevel PrintCategories _SetFilters _OutputFormat PrintUsage _SetCountingStyle split getreader ParseArguments ResetErrorCounts stderr exit verbose_level PrintErrorCounts StreamReaderWriter ProcessFile getwriter int time write flush load join index int rfind datetime split getctime year strip extract_datetime_from_line get_start_time total_seconds strip write get_log_created_year close extract_datetime_from_line open float get_log_created_year compile fix_initial_nan_learning_rate search group OrderedDict append float join basename write_csv print excel parse_log save_csv_files output_dir logfile_path | ## Holistically-Nested Edge Detection Created by Saining Xie at UC San Diego ### Introduction: <img src="http://pages.ucsd.edu/~ztu/hed.jpg" width="400"> We develop a new edge detection algorithm, holistically-nested edge detection (HED), which performs image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are important in order to resolve the challenging ambiguity in edge and object boundary detection. We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of .790) and the NYU Depth dataset (ODS F-score of .746), and do so with an improved speed (0.4s per image). Detailed description of the system can be found in our [paper](http://arxiv.org/abs/1504.06375). ### Citations If you are using the code/model/data provided here in a publication, please cite our paper: @InProceedings{xie15hed, author = {"Xie, Saining and Tu, Zhuowen"}, Title = {Holistically-Nested Edge Detection}, | 1,572 |
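The HED readme above highlights deep supervision on side responses. The toy module below sketches that pattern: each backbone stage produces a 1-channel side output, every side output is upsampled to input resolution, and a learned 1x1 convolution fuses them. The real model is a VGG-based Caffe network, so the tiny placeholder backbone and all sizes here are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFusion(nn.Module):
    """Deeply supervised side outputs plus learned fusion (toy sketch)."""
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Conv2d(3, 16, 3, padding=1),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
        ])
        self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        sides, feat = [], x
        for stage, side in zip(self.stages, self.side):
            feat = F.relu(stage(feat))
            sides.append(F.interpolate(side(feat), size=(h, w),
                                       mode="bilinear", align_corners=False))
        fused = self.fuse(torch.cat(sides, dim=1))
        return sides, fused  # each side map and the fused map get an edge loss

sides, fused = SideOutputFusion()(torch.randn(1, 3, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```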
bmmi/denoising-fluorescence | ['denoising'] | ['A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images'] | denoising/utils/noise.py denoising/utils/practices.py denoising/models/unet.py denoising/utils/misc.py denoising/utils/data_loader.py denoising/train_dncnn.py denoising/test_example.py denoising/train_n2n.py denoising/models/dncnn.py denoising/utils/metrics.py denoising/benchmark.py denoising/utils/plot.py Parser Parser DnCNN_NRL DnCNN UnetN2N conv3x3 UpsamplingNearest2d UnetN2Nv2 is_image_file fluore_to_tensor DenoisingFolder load_denoising_test_mix load_denoising load_denoising_n2n_train DenoisingFolderN2N DenoisingTestMixFolder pil_loader cal_psnr2 cal_ssim cal_psnr module_size stitch_pathes mkdirs mkdir to_numpy add_noise plot_row save_stats save_samples OneCycleScheduler annealing_cos find_lr adjust_learning_rate annealing_linear open int16 uint8 dtype view to from_buffer float32 ByteTensor from_numpy mode unsqueeze int32 tobytes tensor array len DataLoader Compose DenoisingFolder DataLoader Compose DenoisingFolderN2N DataLoader Compose DenoisingTestMixFolder mean clamp clamp array clamp array requires_grad ndarray isinstance Tensor detach makedirs mkdir named_parameters zeros isinstance view float size mean repeat device to poisson set_aspect format set_axis_off close colorbar imshow toggle_label figure ImageGrid savefig to_numpy save_image range tick_params enumerate format plot xlabel close ylabel savetxt savefig figure legend subplots collections max update_ticks set_edgecolor colorbar set_linewidth imshow contourf savefig close tight_layout set_offset_position enumerate set_powerlimits axes set_axis_off min len cos pi param_groups view backward print zero_grad log10 item loss_fn step net append len | # [Fluorescence Microscopy Denoising (FMD) dataset](https://drive.google.com/drive/folders/1aygMzSDdoq63IqSk-ly8cMq0_owup8UM) Code for CVPR 2019 paper "A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images", [arXiv.1812.10366](https://arxiv.org/abs/1812.10366). ```latex @inproceedings{zhang2018poisson, title={A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images}, author={Yide Zhang and Yinhao Zhu and Evan Nichols and Qingfei Wang and Siyuan Zhang and Cody Smith and Scott Howard}, booktitle={CVPR}, year={2019} } | 1,573 |
bnewm0609/comm-eval | ['text generation'] | ['Communication-based Evaluation for Natural Language Generation'] | color_featurizers.py experiment.py monroe_data.py evaluation.py models.py caption_featurizers.py color_eval_utils.py glove2dict CharacterTokenizer SentencePieceTokenizer CaptionFeaturizer CaptionIndexer get_pretrained_glove EndingTokenizer Tokenizer WhitespaceTokenizer target_col_for_context_id plot_score_dists plot_score_dists_for_metric ListenerMetrics flatten NgramMetrics normalize color_phi_fourier ColorFeaturizer color_phi_id Score Speaker calculate_scores delta_e_dist Regressor score_model FeatureHandler evaluate_model ColorSelector LiteralSpeaker ImaginativeListener BeamNode ColorOnlyBaseline ColorGenerator SingleColorCaptionGenerator ColorEncoder PytorchModel PragmaticListener LiteralSpeakerScorer CaptionEncoder LiteralListener CaptionGenerator Color MonroeDataEntry MonroeData get glove2dict normal format print empty kendalltau show format print len extend set flatten title scatter spearmanr xticks pearsonr violinplot enumerate plot_score_dists_for_metric print reshape astype float32 outer meshgrid array pearsonr spearmanr auto value print extend copy mean regressor append max flatten round sum list print len test_features model_scorer zip argmax predictions_to_scores predict test_targets | # Communication-based Evaluation for Natural Language Generation ## Project Description Currently many NLG models are evaluated using n-gram overlap metrics like BLEU and ROUGE, but these metrics do not capture semantics let alone speaker intentions. People use language to communicate, and if we want NLG models to effectively communicate with people, we should evaluate them based on this property. We illustrate how this communication-based evaluation would work and compare it to traditional n-gram overlap scores using the color reference game scenario from Monroe et al., 2017. We collected color reference game captions of various qualities and investigated how well models that use the captions to play the reference game can distinguish between dffierent quality captions compared to n-gram overlap metrics. Our data can be found in [data/csv/clean_data.csv](./data/csv/clean_data.csv). The code to recreate the plots and analysis in the paper using the data and pretrained models can be found in this [jupyter notebook](./notebooks/Replication%20Example%20Notebook.ipynb). ## Setup Create a conda environment with required packages by running `conda create env --file=environment.yml`. If any problems arise while installing the `nlgeval` package see [https://github.com/Maluuba/nlg-eval#setup](https://github.com/Maluuba/nlg-eval#setup) ## Folder and File Descriptions [caption_featurizers.py](./caption_featurizers.py) contains code to process captions with an appropriate tokenizer into a format expected by the models. [color_featurizers.py](./color_featurizers.py) is a similar featurizer for the color inputs. [evaluation.py](./evaluation.py) contains performance metric code for all models. | 1,574 |
bnusss/GGN | ['time series'] | ['A General Deep Learning Framework for Network Reconstruction and Dynamics Learning'] | utils/process.py data/data_generator_cml.py utils/util.py cml_128_not_batch.py data/data_generator_bn.py tools.py train_bn.py utils/model.py train_cml_kuramoto.py utils/process_128.py val_dynamics test main train_gumbel train_dynamics constructor_evaluator_withdiag get_test_accu get_valid_loss train_batch_generator crop_data load_kuramoto_ggn tpr_fpr load_bn_ggn cacu_accu_new_loss load_cml_ggn constructor_evaluator weighted test tran_batch_dyn train_dyn_learner_cml train_batch_net train_batch_dyn_cml cacu_accu train_dyn_learner_bn cacu_mat train_batch_dyn_bn get_offdiag train_batch_generator train_batch_dyn val_dynamics test main train_gumbel train_dynamics ten2bin generate_network init_node init_node_num get_next_node derrida_curve func_table spread_prob detail_phase_graph get_innode exam_phase_graph spread draw_state_graph init_orderly hamming_distance bin2ten distance_grow logistic_map CMLDynamicSimulator GumbelGraphNetwork GumbelGraphNetworkClf Gumbel_Generator calc_tpr_fpr constructor_evaluator train_net_reconstructor val_dynamics_learner train_dynamics_learner skip_diag_strided calc_tpr_fpr constructor_evaluator train_net_reconstructor val_dynamics_learner train_dynamics_learner skip_diag_strided gumbel_softmax_sample save_file read_file get_offdiag gumbel_sample gumbel_softmax zero_grad numpy cuda nodes imshow append to range dynamics_steps close mean item sample prediction_steps enumerate backward print train_dynamics_learner figure step append state_dict print nodes val_dynamics_learner gumbel_path save item sample to dynamics_path prediction_steps enumerate constructor_evaluator print train_net_reconstructor nodes mean reconstruct_steps item append to range prediction_steps enumerate load append constructor_evaluator print gumbel_path nodes val_dynamics_learner eval load_state_dict item sample to dynamics_path prediction_steps enumerate ArgumentParser device experiments seed load_cml_ggn Adam skip val_dynamics parse_args to train_gumbel range inf test manual_seed print add_argument parameters epochs train_dynamics size tolist range to int32 device ones cuda range sum sign mean numpy sample get_offdiag abs cuda range append abs size mean numpy sample sum cuda range append float size range append range concatenate ones shape concatenate int permutation LongTensor print debug tolist DataLoader TensorDataset DoubleTensor append zeros array range print DataLoader load asarray print weighted crop_data DataLoader cuda max str print tolist train_dyn_learner_bn append cuda enumerate str print dynamics_steps nodes train_dynamics_learner mean item append to range prediction_steps enumerate str constructor_evaluator_withdiag print tolist train_batch_generator append cuda enumerate str append add_figure print dynamics_steps close nodes train_dynamics_learner mean imshow figure item sample to numpy range prediction_steps enumerate pos_enc cacu_accu backward size zero_grad dyn_learner unsqueeze repeat permute long loss_fn step cuda backward size step zero_grad mean unsqueeze repeat dynamics_learner cpu zeros abs mse_loss range loss_fn backward zero_grad drop_temperature dyn_learner unsqueeze repeat permute sample step long loss_fn drop_temperature dyn_learner unsqueeze repeat permute sample long print debug size permute cpu range cacu_accu drop_temperature dyn_learner unsqueeze repeat sample long str get_test_accu constructor_evaluator_withdiag cpu mean array cuda tpr_fpr 
cacu_accu backward size zero_grad dyn_learner unsqueeze repeat permute loss_fn step long load_kuramoto_ggn add_edge DiGraph barabasi_albert_graph append randint watts_strogatz_graph range add_node int toarray print pow randint sum range len randint number_of_nodes range range ten2bin append range append number_of_nodes range enumerate number_of_nodes len append range enumerate enumerate append int range str tolist range len int add_edge number_of_nodes ten2bin DiGraph len pow spread init_orderly append bin2ten range add_node pow int range str number_of_nodes toarray print get_next_node append range str number_of_nodes print float range len str plot print init_node xlabel ylabel ylim scatter savefig spread append hamming_distance xlim range str subplot plot print xlabel min spread_prob hamming_distance ylabel savefig spread figure append randint float range len backward size step zero_grad mean unsqueeze repeat dynamics_learner cpu zeros abs mse_loss range size mean unsqueeze repeat dynamics_learner zeros float abs mse_loss range backward size step zero_grad drop_temperature mean unsqueeze repeat dynamics_learner sample zeros abs range calc_tpr_fpr as_strided strides numpy ravel skip_diag_strided rand cuda size gumbel_sample gumbel_softmax_sample load print strip append float open | # A general deep learning framework for network reconstruction and dynamics learning This repository will contain the official PyTorch implementation of: <br> **A general deep learning framework for network reconstruction and dynamics learning**<br> Zhang Zhang, Yi Zhao, Jing Liu, Shuo Wang,Ruyi Tao, Ruyue Xin and Jiang Zhang<sup>\*</sup>(<sup>\*</sup>: Corresponding author) <br> [Download PDF](https://appliednetsci.springeropen.com/articles/10.1007/s41109-019-0194-4#citeas)<br> <img src="./img/threekindofsys.png" width="800px" alt=""> <br> ### Abstract: Many complex processes can be viewed as dynamical systems on networks. However, in real cases, only the performances of the system are known, the network structure and the dynamical rules are not observed. Therefore, recovering latent network structure and dynamics from observed time series data are important tasks because it may help us to open the black box, and even to build up the model of a complex system automatically. Although this problem hosts a wealth of potential applications in biology, earth science, and epidemics etc., conventional methods have limitations. In this work, we introduce a new framework, Gumbel Graph Network (GGN), which is a model-free, data-driven deep learning framework to accomplish the reconstruction of both network connections and the dynamics on it. Our model consists of two jointly trained parts: a network generator that generating a discrete network with the Gumbel Softmax technique; and a dynamics learner that utilizing the generated network and one-step trajectory value to predict the states in future steps. We exhibit the universality of our framework on different kinds of time-series data: with the same structure, our model can be trained to accurately recover the network structure and predict future states on continuous, discrete, and binary dynamics, and outperforms competing network reconstruction methods. | 1,575 |
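The GGN readme above describes a network generator trained with the Gumbel-softmax technique together with a dynamics learner. The snippet below sketches only the generator half: a differentiable adjacency matrix is sampled from per-edge Bernoulli logits. Shapes and names are illustrative and are not the repository's API.

```python
import torch
import torch.nn.functional as F

def sample_adjacency(logits, tau=1.0, hard=True):
    """Draw a (differentiable) adjacency sample from per-edge logits
    of shape (N, N, 2) with the Gumbel-softmax trick."""
    edge_probs = F.gumbel_softmax(logits, tau=tau, hard=hard)  # (N, N, 2)
    adj = edge_probs[..., 0]                   # channel 0 = "edge present"
    adj = adj * (1 - torch.eye(adj.size(0)))   # remove self-loops
    return adj

logits = torch.zeros(10, 10, 2, requires_grad=True)  # learnable edge logits
adj = sample_adjacency(logits)
print(adj.shape)  # torch.Size([10, 10]); the dynamics learner would consume adj
```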
bobby-he/bayesian-ntk | ['gaussian processes'] | ['Bayesian Deep Ensembles via the Neural Tangent Kernel'] | bayesian_ntk/utils.py bayesian_ntk/train_utils.py setup.py bayesian_ntk/train.py bayesian_ntk/predict.py bayesian_ntk/config.py bayesian_ntk/models.py get_model_config homoscedastic_model activation_fn _mean_prediction _inv_operator _get_dependency _is_on_cpu _gp_inference_mat _arr_is_on_cpu _get_matrices gp_inference _add_diagonal_regularizer _make_flatten_uflatten _posterior_std train_model l2_distance_sq reweight_params fetch_regularisation_fn l2_norm_sq fetch_new_predict_fn jvp_fn get_toy_data serial activation_fn jit range _get_dependency kernel_fn canonicalize_get _add_diagonal_regularizer size divmod dot fl _make_flatten_uflatten op dot diag transpose op ndarray hasattr isinstance _get_matrices _mean_prediction _inv_operator reshape ntk nngp _posterior_std homoscedastic_model init_fn new_predict_fn fetch_regularisation_fn grad_loss len inputs opt_update sgd opt_init fetch_new_predict_fn jit get_params range split jvp_fn reweight_params tree_flatten range len append tree_flatten append tree_flatten Data target_fn concatenate reshape uniform linspace split | # Bayesian Deep Ensembles via the Neural Tangent Kernel Repository, `bayesian-ntk`, to accompany the paper [Bayesian Deep Ensembles via the Neural Tangent Kernel](https://arxiv.org/abs/2007.05864). Please note that analytic NTKGP posterior is now implemented in [neural-tangents](https://github.com/google/neural-tangents), more details [here](https://github.com/google/neural-tangents/pull/93). <p align="center"> <img align="middle" src="./plots/toy_1d_plot.png" width="666" /> </p> ## Requirements To install requirements: ```setup pip install -r requirements.txt ``` | 1,576 |
bogireddytejareddy/micro-expression-recognition | ['facial expression recognition'] | ['Spontaneous Facial Micro-Expression Recognition using 3D Spatiotemporal Convolutional Neural Networks'] | SMIC/MicroExpSTCNN.py SMIC/Late_MicroFuseNet.py CASME-SQUARE/Late_MicroExpFuseNet.py CASME-SQUARE/Intermediate_MicroExpFuseNet.py CASME-SQUARE/MicroExpSTCNN.py SMIC/Intermediate_MicroFuseNet.py NoFaces annotate_landmarks get_landmark TooManyFaces NoFaces annotate_landmarks get_landmark TooManyFaces NoFaces annotate_landmarks get_landmark TooManyFaces NoFaces annotate_landmarks get_landmark TooManyFaces matrix detector putText enumerate circle str | # Spontaneous Facial Micro Expression Recognition using 3D Spatio-Temporal Convolutional Neural Networks ## Abstract Facial expression recognition in videos is an active area of research in computer vision. However, fake facial expressions are difficult to be recognized even by humans. On the other hand, facial micro-expressions generally represent the actual emotion of a person, as it is a spontaneous reaction expressed through human face. Despite of a few attempts made for recognizing micro-expressions, still the problem is far from being a solved problem, which is depicted by the poor rate of accuracy shown by the state-of-the-art methods. A few CNN based approaches are found in the literature to recognize micro-facial expressions from still images. Whereas, a spontaneous micro-expression video contains multiple frames that have to be processed together to encode both spatial and temporal information. This paper proposes two 3D-CNN methods: MicroExpSTCNN and MicroExpFuseNet, for spontaneous facial micro-expression recognition by exploiting the spatiotemporal information in CNN framework. The MicroExpSTCNN considers the full spatial information, whereas the MicroExpFuseNet is based on the 3D-CNN feature fusion of the eyes and mouth regions. The experiments are performed over CAS(ME)^2 and SMIC micro-expression databases. The proposed MicroExpSTCNN model outperforms the state-of-the-art methods. ## Prerequisites - [Keras 2.0.0](https://github.com/fchollet/keras) Strictly ## Results | Method | Proposed Year | Method Type | CAS(ME)^2 | SMIC | | ------ | ------------- | ----------- | --------- | ---- | | LBP-TOP | 2013 | HCM | - | 42.72% | | STCLQP | 2016 | HCM | - | 64.02% | | 1,577 |
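The micro-expression readme above is built around 3D spatiotemporal convolutions over short facial clips. The minimal model below only illustrates what a 3D-CNN over (channels, frames, height, width) input looks like in PyTorch; the layer sizes and the three-class head are made up and are not the MicroExpSTCNN or MicroExpFuseNet architectures (which are implemented in Keras).

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=3, padding=1),  # joint spatial + temporal conv
    nn.ReLU(),
    nn.MaxPool3d(kernel_size=3),
    nn.Flatten(),
    nn.LazyLinear(3),                            # e.g. three expression classes
)
clip = torch.randn(2, 1, 18, 64, 64)  # batch of 18-frame grayscale clips
print(model(clip).shape)              # torch.Size([2, 3])
```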
bootphon/measuring-regularities-in-word-embeddings | ['word embeddings'] | ['Analogies minus analogy test: measuring regularities in word embeddings'] | metrics.py analogy_decomposition.py models.py plot.py analogy_test.py random_sets.py read_bats.py delta_sim decompo all_decompositions save_decompo analogy_decomposition start_end_words analogy_decomposition_reference bats_test evaluate_word_analogies_bats most_similar save_analogy_test permutation_onecycle_avoidtrue normal_and_shuffled_offsets offset similarite_offsets OCS_PCS shuffled_offsets metrics_from_model shuffled_directions offsets permutation_onecycle save_metrics context_sentence token_embedding offset_contextual sublist word_embedding download_model download_all load_model_custom load_model load_model_fromlist vocabulary_model clean_pairs_fromvocab clean_pairs plot_result save_img plot_metrics plot_decomposition metrics_random_from_model similarities_random ocs_pcs_random shuffled_offsets_random save_metrics_random offsets_perms_random similarities_shuffle_random bats_names_pairs vocab_bats array list norm mean append range len list norm mean append range len list norm mean append range len delta_sim analogy_decomposition analogy_decomposition_reference start_end_words str T print to_csv strftime mkdir DataFrame array ndarray init_sims isinstance astype float32 index add dot argsort word_vec append vocab str most_similar getLogger print set info append float listdir range len append evaluate_word_analogies_bats listdir str print to_csv strftime mkdir append DataFrame mean convert_tokens_to_ids tokenize array randint list permutation range len range permutation_onecycle len array permutation_onecycle append list range len list print mean append range roc_auc_score len get_vector token_embedding range len join sublist convert_tokens_to_ids squeeze mean stack permute append context_sentence tensor tokenize cat len permutation_onecycle_avoidtrue permutation_onecycle append range len print clean_pairs shuffled_offsets offsets OCS_PCS similarite_offsets print bats_names_pairs normal_and_shuffled_offsets str T print to_csv strftime mkdir DataFrame from_pretrained str join glove2word2vec move print extractall close mkdir download numpy ZipFile open print download_model from_pretrained str download_model print numpy print list keys set vocabulary_model add_trace Bar Figure update_layout array update_yaxes Bar Figure update_layout array print str mkdir write_image str read_csv int list vocab_bats print hstack min choice permutation_onecycle append get_vector array range len print min permutation_onecycle append range len print print print hstack array OCS_PCS str similarities_random shuffled_offsets_random ocs_pcs_random append similarite_offsets vocabulary_model offsets_perms_random similarities_shuffle_random bats_names_pairs normal_and_shuffled_offsets str T print to_csv strftime mkdir DataFrame len join str set append listdir hstack | # Measuring regularities in word embeddings Implementation of the Python code used for the CoNLL 2020 article: "Analogies minus analogy test: measuring regularities in word embeddings". This code allow easy computation of the Offset Concentration Score (OCS) and Pairwise Consistency Score (PCS) on a given model, pretrained or custom; on the Bigger Analogy Test Set dataset. Other experiences of the paper can be replicated: - Decomposing the analogy score, the reference score, and $\Delta_sim$. - Computing the OCS and PCS on randomized BATS sets. 
- Computing the analogy test accuracy (for the normal and "honest" version) of a model.
- Easily plotting some results.
## Getting Started
### Prerequisites | 1,578
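As a rough illustration of the offset-concentration intuition behind the scores mentioned above (how aligned the pair-offset directions within one BATS category are), the snippet below averages pairwise cosine similarities of normalized offsets. The exact OCS and PCS definitions are the ones in the paper, not this simplification, and the toy embeddings are random.

```python
import numpy as np

def offset_concentration(pairs, emb):
    """Mean off-diagonal cosine similarity between normalized pair offsets."""
    offsets = []
    for a, b in pairs:
        diff = emb[b] - emb[a]
        offsets.append(diff / np.linalg.norm(diff))
    offsets = np.stack(offsets)
    sims = offsets @ offsets.T
    n = len(offsets)
    return (sims.sum() - n) / (n * (n - 1))

emb = {w: np.random.randn(50) for w in ["man", "woman", "king", "queen"]}
print(offset_concentration([("man", "woman"), ("king", "queen")], emb))
```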
borbysh/coolmomentum | ['unity', 'physical simulations', 'stochastic optimization'] | ['CoolMomentum: A Method for Stochastic Optimization by Langevin Dynamics with Simulated Annealing'] | models/resnet_cifar10.py main_cool.py resnet_adam200.py coolmomentum_tf.py resnet_cool200.py train_cifar10.py optimizers/coolmom_pytorch.py utils_cool.py log_writer.py coolmom_pytorch.py optimizers/__init__.py resnet_sgd200.py Coolmomentum SGD Write_to_log model_fn export _verify_non_empty_string main _select_tables_from_flags resnet_v2 resnet_v1 lr_schedule resnet_layer resnet_v2 resnet_v1 resnet_layer resnet_v2 resnet_v1 lr_schedule resnet_layer build_model train_epoch test train_cifar10 parse_args build_dataset drop_connect build_learning_rate build_optimizer TpuBatchNormalization archive_ckpt get_ema_vars EvalCkptDriver BatchNormalization DepthwiseConv2D ResNet ResNet18 ResNet34 Bottleneck ResNet101 ResNet50 BasicBlock ResNet152 Coolmomentum supported_optimizers parse_optim_args parse_optimizer required_length get_global_step weight_decay add_n variables_to_restore data_format CrossShardOptimizer depth_coefficient set_learning_phase drop_connect_rate transpose build_optimizer get_collection cast batch_norm_epsilon batch_norm_momentum sum use_tpu softmax_cross_entropy format one_hot build_learning_rate build_model get_ema_vars width_coefficient info isinstance num_label_classes reshape base_learning_rate float32 UPDATE_OPS ExponentialMovingAverage dropout_rate bigtable_eval_prefix bigtable_column_family bigtable_column_qualifier _verify_non_empty_string bigtable_table bigtable_train_prefix bigtable_instance input_image_size export_saved_model build_image_serving_input_fn info input_image_size tpu efficientnet_tpu_params TPUClusterResolver model_dir TPUEstimator export _load_global_step_from_checkpoint_dir max train_steps use_async_checkpointing eval_batch_size AsyncCheckpointSaverHook checkpoints_iterator archive_ckpt append iterations_per_loop export_dir latest_checkpoint efficientnet_params efficientnet_edgetpu_params startswith info steps_per_eval build_imagenet_input time int evaluate min num_eval_images dict model_name train RunConfig print Conv2D conv int add Model Input range resnet_layer int add Model Input range resnet_layer parse_known_args add_argument ArgumentParser DataLoader Compose CIFAR10 to DataParallel print eval criterion backward print zero_grad tqdm train step max net enumerate model max log open seed watch append build_dataset CrossEntropyLoss range format build_model close test init manual_seed optim parse_optimizer print min train_epoch parameters epochs wandb_project int cos float32 pi cast info exponential_decay cond int MomentumOptimizer float32 GradientDescentOptimizer cast fatal RMSPropOptimizer info cond div floor join basename generate_checkpoint_state_proto Exists Glob DeleteRecursively Copy MakeDirs info float split append trainable_variables get_collection global_variables items list format add_argument ArgumentParser parse_args parse_optim_args format | # CoolMomentum Optimizer for Deep Neural Networks
## Stochastic Optimization by Langevin Dynamics with Simulated Annealing
This repository contains implementations for [CoolMomentum: A Method for Stochastic Optimization by Langevin Dynamics with Simulated Annealing](https://www.nature.com/articles/s41598-021-90144-3) (published in Scientific Reports) in TensorFlow and PyTorch.
### Usage
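The usage section is truncated in this dump. As a conceptual stand-in only (not the repository's Coolmomentum optimizer or its documented API), the loop below anneals the momentum coefficient of plain SGD towards zero over training, which is the cooling idea in a simple linear form; the paper specifies its own schedule and temperature interpretation.

```python
import torch

model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.99)

total_steps, rho_0 = 1000, 0.99
for step in range(total_steps):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    rho = rho_0 * (1.0 - step / total_steps)  # cool the momentum down to zero
    for group in optimizer.param_groups:
        group["momentum"] = rho
    optimizer.step()
```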
| 1,579 |
bordesf/Infusion | ['denoising'] | ['Learning to Generate Samples from Noise through Infusion Training'] | parzen.py normalization.py run.py BatchNormLayer LocalResponseNormalization2DLayer batch_norm log_mean_exp cross_validate_sigma theano_parzen get_parzen_estimator get_nll print_samples get_probabilites batch_iterator samples_mix run_chaine_samples_for sample_normal get_first_term compute_step_samples eval_log_gaussian compute_step_train log_add_exp Compute_probs_gaussian build_MLP_mnist_bn run_chaine_train_for create_MNIST_data_streams BatchNormLayer NonlinearityLayer getattr identity cross_validate_sigma format print reshape sqrt logspace theano_parzen std get_nll int list time extend ceil float parzen range append len max log_mean_exp dimshuffle pi sqrt shared matrix sum log print mean theano_parzen append argmax get_nll int subplots reshape set_yticks grid axis close sqrt imshow set_xticks savefig figure swapaxes clip SequentialScheme default_stream ShuffledScheme Cast ScaleAndShift MNIST var reshape mean swapaxes zeros range asarray concatenate print repeat Compute_probs_gaussian expand_dims DenseLayer InputLayer ReshapeLayer batch_norm sample_normal dimshuffle eval_log_gaussian samples_mix log log_add_exp constant get_output samples_mix log eval_log_gaussian log_add_exp disconnected_grad get_first_term eval_log_gaussian compute_step_train range sample_normal get_output sample_normal compute_step_samples stack range | # Infusion Training <p align="left"> <img src="infusion.jpg" width="800"/> </p> This repository contains the code for the paper: <br /> **"Learning to Generate Samples from Noise through Infusion Training."**, <br /> Florian Bordes, Sina Honari, Pascal Vincent, ICLR 2017. <br /> https://arxiv.org/abs/1703.06975 In order to use it, you have to install Theano, Lasagne, Fuel and theirs dependencies. To run an experiment on a GPU, you have to use: | 1,580 |
borgr/EoE | ['grammatical error correction'] | ['Automatic Metric Validation for Grammatical Error Correction'] | utils.py create_db.py compare_measures.py plotting.py extract_matches parse_combined_corpora extract_references sentence_input_to_sentence_list _bleu_wrapper measure_corpora score_corpora score_sentences _m2_wrapper _glue_wrapper ranks_to_scores Imeasure_num_callable shuffle_sources Imeasure_callable score_changes_per_type Grammaticallity_callabale USim_callabale CombinedReference_less_callabale score_corpus parse_combined_sentence corpus_input_to_sentence_list _ibleu_wrapper choose_source_per_sentence print_list_statistics clean_tmp parse_Usim_sentence _levenshtein_wrapper from_measure add_measure choose_source_per_chain parse_grammatical_corpora main sentence_length parse_Usim_corpora Reference_less_callabale _levenshtein_references_wrapper _levenshtein_score extract_references_per_chain assess_measures convert2edits parse_grammatical_sentence create_levelled_files create_ranks create_corpora main parse_m2_to_db plot_correlations log_hist plot_gleu ncr split_changes_by_annot partial_order_kendal_tau_from_counts binomial_parameters_by_mean_and_var apply_changes load_object_by_ext npermutations traverse_ranks find_in_iter get_lines_from_file list_to_hashable get_hash kendall_partial_order_from_seq kendall_mergesort kendall_in_parts traverse_chains iterate_chains choose_uniformely save_object_by_ext append choose_uniformely chooser transpose join save_object_by_ext parallel_to_m2 load_annotation isfile get_hash save_object_by_ext transpose isfile convert2edits corpus_measure join starmap print close repeat zip Pool len print squeeze index mean score_corpora ravel len isfile save_object_by_ext print transpose sentence_measure next iterate_chains zip append iter convert2edits corpus_measure len append min linspace len append iterate_chains zip append iterate_chains zip extract_matches extract_references preprocess_corpus_level_func measure_corpora score_corpora score_sentences load_object_by_ext choose_corpus_source str list basename ranks_to_scores preprocess_sentence_level_func traverse_ranks dirname print_list_statistics append choose_source range kendall_partial_order_from_seq traverse_chains mean from_measure iterate_chains join items save_object_by_ext print extract_references_per_chain isfile len print transpose spearmanr zip pearsonr append range len append parse_grammatical_sentence parse_grammatical_corpora ucca_parse_sentences list sentence_input_to_sentence_list set ucca_parse_sentences list corpus_input_to_sentence_list set create_one_sentence_files list sentence_input_to_sentence_list set create_one_sentence_files list corpus_input_to_sentence_list set verbose_manual_analysis linspace create_corpora create_ranks source seed str exit USim_callabale manual_analysis debug add_measure random_seed join print matches_num force assess_measures print_help max_permutations makedirs print rmtree isdir join makedirs isfile load_object_by_ext Lock join makedirs isfile load_object_by_ext Lock levenshtein ncr int list permutations npermutations save_object_by_ext print min find_in_iter filter list_to_hashable append parse_m2_to_db range enumerate len print min append choose_uniformely range len prob parse_m2_to_db max basename tolist find_in_iter dirname append expand_dims range zip binomial choose_uniformely enumerate join int save_object_by_ext print array show set distplot show subplots print text transpose set_xlim regplot get_ylim zip DataFrame set_ylim show list format 
subplots print text len regplot mean get_ylim round zip append DataFrame enumerate sort md5 encode update print list mul reduce factorial values len list mul min reduce range add set repeat zip enumerate len cdf sqrt cdf list print lexsort kendall_mergesort sqrt zip append sum range len print int sorted split append repeat zip iterate_chains traverse_chains | # EoE The project was written in python3.6 Create_db.py contains a code to create MAEGE lattices compare_measures.py runs comparisons and analysis, please use -h flag to find usage Examples of current measures and ways to pass them to the evaluation scripts can be found in the code (e.g. lt which is run by calling an outside script) utils.py contains among other things the way to compute partial ordering kendall tau. If you use this repository in a research, please cite | 1,581 |
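The EoE readme above notes that utils.py contains the partial-ordering Kendall tau computation. For orientation only, the snippet below computes the ordinary Kendall tau between two rankings with SciPy; the repository's partial-order variant is more involved, so treat this as a reference point rather than a replacement.

```python
from scipy import stats

human_ranking = [1, 2, 3, 4, 5]   # e.g. corrections ordered by induced quality
metric_ranking = [2, 1, 3, 5, 4]  # the same corrections ordered by a GEC metric
tau, p_value = stats.kendalltau(human_ranking, metric_ranking)
print(f"kendall tau = {tau:.3f} (p = {p_value:.3f})")
```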
borgr/USim | ['grammatical error correction'] | ['Reference-less Measure of Faithfulness for Grammatical Error Correction'] | USim.py USim get_parsed_subdir ucca_parse_sentences parsed_sentence2xml parse_location ucca_parse_files get_parsed_subdirs get_sentence_id rerank_by_uccasim _ucca_parse_text announce_finish main get_parser normalize_sentence source_sentences parser_path ucca_parse_files source_files reference_files reference_sentences ucca_parse_sentences parse_dir lower strip sub str starmap list join print len close reduce smart_open ucca_parse add load_annotation zip sep Pool get_roro_packed join Parser join list str print set realpath get_parsed_subdirs get_sentence_id isfile _ucca_parse_text enumerate makedirs isdir print realpath parse_location _ucca_parse_text makedirs join list parse remove endswith passage2file parse_location dirname listdir get_parser enumerate from_text makedirs len realpath get_sentence_id walk enumerate realpath get_sentence_id walk print realpath normalize_sentence parse_location get_sentence_id get_parsed_subdir parsed_sentence2xml linux_distribution set call Beep Popen split | # USim monolingual sentence similarity measure Please cite our [NAACL2018 paper](http://www.aclweb.org/anthology/N18-2020) if you use our measure or annotations. ```@inproceedings{choshen2018reference, title={Reference-less Measure of Faithfulness for Grammatical Error Correction}, author={Choshen, Leshem and Abend, Omri}, booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)}, volume={2}, pages={124--129}, year={2018} | 1,582 |
borhanMorphy/light-face-detection | ['face detection'] | ['LFFD: A Light and Fast Face Detector for Edge Devices'] | fastface/metric/utils.py fastface/transforms/compose.py fastface/utils/cache.py fastface/arch/lffd/__init__.py fastface/utils/random.py fastface/transforms/functional/interpolate.py doc_samples/inference.py fastface/metric/widerface_ap.py fastface/metric/ar.py fastface/utils/box.py fastface/transforms/rotate.py fastface/transforms/augmentation/lffd_random_sample.py tutorials/bentoml_deployment/service.py fastface/transforms/functional/__init__.py fastface/transforms/interpolate.py tests/test_base_apis.py demo.py fastface/loss/focal_loss.py fastface/arch/lffd/blocks/anchor.py fastface/transforms/augmentation/__init__.py setup.py fastface/utils/__init__.py fastface/utils/preprocess.py fastface/__init__.py fastface/utils/data.py fastface/transforms/functional/color_jitter.py fastface/transforms/discard.py fastface/arch/lffd/module.py fastface/transforms/augmentation/random_rotate.py fastface/transforms/augmentation/blur.py tests/test_metric_apis.py fastface/arch/lffd/blocks/conv.py tutorials/bentoml_deployment/build.py docs/source/conf.py fastface/arch/lffd/blocks/head.py fastface/adapter/gdrive.py fastface/dataset/fddb.py fastface/dataset/widerface.py tutorials/widerface_benchmark/test_widerface.py fastface/utils/config.py fastface/adapter/extract_handler.py tutorials/bentoml_deployment/test.py fastface/transforms/normalize.py tests/test_loss_apis.py fastface/arch/lffd/blocks/__init__.py fastface/metric/__init__.py fastface/utils/kernel.py fastface/transforms/functional/rotate.py fastface/loss/__init__.py fastface/dataset/__init__.py fastface/adapter/__init__.py fastface/api/__init__.py tests/utils.py tests/test_utility_apis.py fastface/metric/functional/__init__.py fastface/metric/functional/ap.py fastface/utils/cluster.py fastface/loss/iou_loss.py fastface/metric/ap.py fastface/arch/lffd/blocks/backbone_v1.py fastface/version.py fastface/module.py fastface/transforms/augmentation/color_jitter.py fastface/transforms/functional/pad.py fastface/arch/lffd/blocks/backbone_v2.py fastface/utils/vis.py fastface/adapter/http.py doc_samples/export_to_onnx.py fastface/transforms/pad.py doc_samples/export_to_torchscript.py doc_samples/training.py fastface/utils/geo.py fastface/transforms/__init__.py tests/test_dataset_apis.py fastface/arch/lffd/blocks/resblock.py fastface/dataset/base.py fastface/transforms/augmentation/random_horizontal_flip.py doc_samples/testing.py tests/test_transforms_apis.py tests/test_module_apis.py main load_image get_arguments get_version FaceDetector ExtractHandler GoogleDriveAdapter HttpAdapter download_object list_arch_configs download_pretrained_model list_pretrained_models list_archs get_arch_config LFFD Anchor LFFDBackboneV1 LFFDBackboneV2 conv3x3 conv1x1 LFFDHead ResBlock BaseDataset default_collate_fn _IdentitiyTransforms _ellipse2box _load_single_annotation_fold FDDBDataset _get_validation_set WiderFaceDataset _parse_annotation_file BinaryFocalLoss DIoULoss AveragePrecision AverageRecall generate_prediction_table WiderFaceAP average_precision Compose FaceDiscarder Interpolate ConditionalInterpolate Normalize Padding Rotate RandomGaussianBlur ColorJitter LFFDRandomSample RandomHorizontalFlip RandomRotate adjust_saturation adjust_contrast adjust_brightness interpolate pad rotate cxcywh2xyxy jaccard_vectorized intersect jaccard_centered generate_grids intersect_centered batched_nms xyxy2cxcywh get_checkpoint_cache_dir get_data_cache_dir 
get_model_cache_dir ensure_path get_cache_dir KMeans get_registry get_pkg_root_path get_pkg_arch_path get_registry_path get_arch_cls discover_archs get_arch_pkg default_collate_fn get_rotation_matrix apply_conv2d get_gaussian_kernel prepare_batch adjust_results generate_uniform_boxes draw_rects render_predictions render_targets test_api_exists test_list_archs test_list_pretrained_models test_list_arch_configs test_get_arch_config test_download test_api_exists test_loss_api_exists test_loss_build test_api_exists test_get_available_metrics test_module_from_checkpoint test_api_exists test_module_export_to_torchscript test_module_forward test_module_export_to_onnx test_module_build_from_yaml test_module_predict test_module_from_pretrained test_module_build test_interpolate_call test_padding_call test_api_exists test_cache_func_exists test_visualize_func_exists test_config_func_exists build_module_args load_image_as_tensor mixup_arguments load_image get_img_paths FaceDetectionService add_argument ArgumentParser imread ascontiguousarray from_pretrained show summarize render_predictions eval load_image to predict format info get_registry join download_object get_model_cache_dir get_arch_cls xavier_normal_ weight fill_ Conv2d xavier_normal_ weight fill_ Conv2d items list ndarray isinstance contiguous astype float32 from_numpy zip enumerate tan min cos pi sin atan max join format tuple int list map append len join str concatenate ones squeeze append loadmat bitwise_or range jaccard_vectorized size where stack zip append zeros max range sum arange cumsum unique_consecutive size clone generate_prediction_table append tensor empty max cat enumerate min max enhance min max enhance min max enhance resize int max array ones shape int max min matmul stack empty array clip get_rotation_matrix meshgrid arange intersect expand_as intersect_centered expand_as clamp size min expand max size min expand dtype to max get_pkg_root_path get_pkg_root_path get_registry_path join listdir get_pkg_arch_path discover_archs sep replace get_pkg_arch_path format replace get_pkg_arch_path dir discover_archs import_module sep radians cos sin uint8 as_strided subtract tuple astype shape pad append range einsum exp arange square pi meshgrid contiguous pad unsqueeze interpolate append tensor enumerate repeat enumerate cxcywh2xyxy astype float32 from_numpy uniform fromarray list choice rectangle zip keys fromarray list astype choice rectangle int32 keys fromarray list astype choice rectangle int32 keys list_pretrained_models list_archs list_arch_configs list_arch_configs get_arch_config download_pretrained_model loss_cls getattr loss metric_cls metric getattr get_arch_config build from_pretrained join from_checkpoint download_pretrained_model get_model_cache_dir build_from_yaml from_pretrained load_image eval predict from_pretrained eval forward load_image_as_tensor rand to_torchscript build eval forward name rand astype float32 build eval run load_image Interpolate interpolate padding shape load_image max Padding list_arch_configs list_archs load_image | # FastFace: Lightweight Face Detection Framework  [](https://fastface.readthedocs.io/en/latest/?badge=latest) [](https://pepy.tech/project/fastface)   **Easy-to-use face detection framework, developed using [pytorch-lightning](https://www.pytorchlightning.ai/).**<br> **Checkout [documentation](https://fastface.readthedocs.io/en/latest/) for more.** ## Key Features * :fire: **Use pretrained models for inference with just few lines of code** | 1,583 |
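The fastface readme above advertises pretrained-model inference in a few lines. The sketch below is pieced together from API names visible in the listing (list_pretrained_models, FaceDetector.from_pretrained, predict); the exact signatures, model identifiers, return format, and image loader are assumptions that should be checked against the linked documentation.

```python
import imageio
import fastface as ff  # assumed import name

print(ff.list_pretrained_models())  # assumed to list downloadable model names
model = ff.FaceDetector.from_pretrained(ff.list_pretrained_models()[0])
model.eval()

img = imageio.imread("some_photo.jpg")  # any RGB image
preds = model.predict(img)              # assumed to return boxes and scores
print(preds)
```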
boschresearch/PAC_GP | ['generalization bounds', 'gaussian processes'] | ['Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds'] | pac_gp/pac_gp_example.py pac_gp/gp/gpr.py pac_gp/utils/utils.py pac_gp/utils/transformations.py pac_gp/gp/mean_functions.py pac_gp/utils/metrics.py pac_gp/gp/kerns.py pac_gp/sparseGP_study.py pac_gp/epsilon_study.py pac_gp/gp/pac_gp.py pac_gp/utils/utils_tf.py pac_gp/utils/gpflow_wrapper.py pac_gp/gp/conditionals.py pac_gp/utils/bin_kl.py pac_gp/utils/data_generator.py pac_gp/utils/plotting.py pac_gp/utils/load_dataset.py pac_gp/utils/helpers.py feature_conditional base_conditional GPR SVGP GPRFITC RBF Zero Product MeanFunction Additive Constant Linear PAC_INDUCING_GP PAC_GP_BASE PAC_INDUCING_HYP_GP PAC_HYP_GP PAC_SPARSE_GP_BASE PAC_FULL_GP_BASE binary_kl_inv_grad binary_kl_inv binary_kl BinaryKLInv BinaryKLInvGrad generate_sin_data GPflowSparseWrapper GPflowFullWrapper GPflowWrapper load gibbs_risk_noiseless epsilon_loss inv_gauss empirical_risk bayes_risk neg_ll mean_squared_error plot_lines plot Transform LowerTriangular Log1pe Chain ITransform Configurable variable_summaries flatten vec_to_tri expand_vector clamp_and_round py_func eye K Kdiag matrix_triangular_solve transpose square reduce_sum matmul matrix_band_part stack cholesky tile expand_dims py_func exp isinf binary_kl_inv log isinf asarray broadcast_arrays inf zeros_like log uniform load_boston sqrt exp sum cdf reset_index arange xlabel yticks ylabel div sqrt bar set_visible ylim legend append N xticks enumerate len plot_lines add_subplot figure subplots_adjust round log concatenate cumsum reshape zip append sum array list constant zip str get_default_graph randint | # PAC-GP This is the companion code for the Gaussian process training method reported in the paper [Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds by David Reeb et al., NIPS 2018](https://papers.nips.cc/paper/7594-learning-gaussian-processes-by-minimizing-pac-bayesian-generalization-bounds). The code allows the users to experiment with the proposed GP training method. Please cite the above paper when reporting, reproducing or extending the results. ## Purpose of the project This software is a research prototype, solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way. ## Requirements | 1,584 |
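The PAC_GP readme above concerns training Gaussian processes by minimizing PAC-Bayesian bounds, and the file list includes a binary-KL inversion helper. The snippet below evaluates a generic PAC-Bayes-kl bound by inverting the binary KL divergence with bisection; the sample size, confidence level, and kl_term value are made up, and this is the textbook form of the bound rather than the paper's exact statement.

```python
import numpy as np

def binary_kl(q, p, eps=1e-12):
    return (q * np.log((q + eps) / (p + eps))
            + (1 - q) * np.log((1 - q + eps) / (1 - p + eps)))

def kl_inverse(q_hat, c, tol=1e-9):
    """Largest p with binary_kl(q_hat, p) <= c, found by bisection."""
    lo, hi = q_hat, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_kl(q_hat, mid) > c:
            hi = mid
        else:
            lo = mid
    return lo

n, delta = 1000, 0.05
empirical_risk, kl_term = 0.08, 12.3  # made-up values for illustration
bound = kl_inverse(empirical_risk, (kl_term + np.log(2 * np.sqrt(n) / delta)) / n)
print("risk bound:", round(bound, 3))
```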
boschresearch/Structured_DGP | ['gaussian processes'] | ['Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties'] | experiments/extrapolation_study.py experiments/utils/prepare_data.py experiments/utils/evaluate_performance.py structured_dgp/full_dgp.py experiments/demo_boston.py tests/test_Approx_Layer.py tests/test_KL.py experiments/utils/plots.py structured_dgp/all_layers.py experiments/utils/models.py tests/test_ELBO.py tests/test_sample_first_layer.py experiments/utils/optimise.py experiments/utils/logger_io.py structured_dgp/init_linear.py tests/test_propagate.py experiments/convergence_study.py tests/test_whitePrior.py calculate_tll direct_comparison save_logger get_tlls get_logger save_tlls prepare_model run_adam Logger covariance_plots convergence_plot prepare_dataset_extrapolate prepare_dataset Layers Fast_Stripes_Arrow_Layers Stripes_Arrow_Layers Fully_Coupled_Sampling_Layers Fully_Coupled_Layers Mean_Field_DGP Full_DGP Approx_Full_DGP Full_DGP_Sampled Full_DGP_Base Fast_Approx_Full_DGP init_linear TestELBO my_FullDGP my_ApproxDGP sal0dgp my_FullDGP TestELBO TestKL sal0dgp TestPropagate autoflow_Layer TestFirstLayer autoflow_saldgp my_FullDGP TestWhitening logsumexp predict_y shape logpdf zeros sum append mean join list elbo_log File close keys mkdir time_log create_group join File array close join calculate_tll File close mkdir array create_group print File close array exists set_trainable RBF Mean_Field_DGP value block_diag copy Gaussian Full_DGP Full_DGP_Sampled Fast_Approx_Full_DGP variables_initializer time make_optimize_tensor Variable anchor enquire_session mean assign_add AdamOptimizer cast Logger exponential_decay range zeros run show join T subplots set_title close colorbar imshow savefig abs log show join list plot xlabel rc add_subplot ylabel close set_visible savefig figure legend xlim keys seed int permutation kmeans2 load_boston set_random_seed values seed int randn kmeans2 load_boston mean set_random_seed dot argsort zeros std values set_trainable svd T Zero zip concatenate copy dot input_dim Identity append Linear | # Structured-DGP
This is the companion code for the inference methods for deep Gaussian Processes
reported in the paper [Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties by Jakob Lindinger et al.](https://arxiv.org/abs/2005.11110).
The code allows users to experiment with the proposed DGP inference method.
Please cite the above paper when reporting, reproducing or extending the results.
<p align="center">
<img src="img/thumbnail_landscape.png" width="350">
</p>
| 1,585 |
boschresearch/hierarchical_anomaly_detection | ['anomaly detection'] | ['Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features'] | invglow/invertible/split_merge.py invglow/invertible/graph.py invglow/invertible/branching.py invglow/models/patch_glow.py invglow/create_tiny.py invglow/invertible/inv_permute.py invglow/invertible/pure_model.py invglow/invertible/gaussian.py invglow/util.py invglow/invertible/noise.py invglow/models/glow.py invglow/losses.py invglow/datasets.py invglow/invertible/splitter.py invglow/invertible/conditional.py invglow/invertible/coupling.py invglow/invertible/expression.py SerraReplicationCode/ReferenceGlowVsDirectPng.py invglow/invertible/sequential.py invglow/evaluate.py invglow/invertible/distribution.py invglow/invertible/init.py invglow/scheduler.py invglow/models/class_conditional.py invglow/load_data.py invglow/invertible/identity.py invglow/invertible/affine.py invglow/invertible/view_as.py invglow/exp.py invglow/invertible/categorical_mixture.py invglow/invertible/actnorm.py invglow/invertible/inverse.py invglow/main.py invglow/folder_locations.py load_image get_FASHION_MNIST get_celeb_a check_dataset load_brats get_one_hot_encode PreprocessedLoader get_CIFAR100 load_train_test_with_defaults CelebA get_MNIST get_SVHN load_train_test_as_glow postprocess load_tiny_imagenet LSUN TinyImageNet BRATS2018 preprocess load_train_test get_tiny_imagenet TensorDatasetWithTransforms get_CIFAR10 load_celeb_a load_train_test_tiny get_out_diffs_per_set evaluate_without_noise get_nlls_only_final get_grey_loaders collect_log_dets get_rgb_loaders identity collect_for_loaders get_log_dets_probs_for_set compute_bpds compute_aupr_for_scores compute_func_for_sets collect_outputs _evaluate_without_noise collect_for_dataloader compute_auc_for_scores get_nlls collect_log_dets_for_loaders set_non_finite_to get_in_diffs_per_set apply_outlier_losses_from_outs apply_inlier_loss_from_outs apply_inlier_losses BaseFineIndependent run_exp apply_outlier_losses load_data nll_diff_loss nll_class_loss ScheduledOptimizer set_random_seeds flatten_2d interpolate_nans_in_df check_gradients_clear log_sum_exp clip_to_finite_max view_2d np_to_var ensure_cuda step_and_clear_gradients enforce_2d ensure_on_same_device grads_all_finite var_to_np PureActNorm ActNorm inverse_elu init_act_norm AffineCoefs AdditiveCoefs MultiplicativeBlock AffineModifier AffineBlock AdditiveBlock SwitchX1X2 ChunkByIndex ChunkChans ChunkByIndices CatChans InvertibleClassConditional ApplyAndCat ConditionTransformWrapper CatChansMerger CouplingLayer PerClass Unlabeled NClassIndependentDist ZeroDist MergeLogDets Expression get_mixture_gaussian_log_probs get_gauss_samples get_gaussian_log_probs ConditionalNode MergeLogDetsNode AbstractNode CatAsListNode get_all_nodes get_nodes_by_tags get_nodes_by_names Node IntermediateResultsNode CatChansNode SelectNode Identity init_all_modules prepare_init Inverse InvPermute Shuffle GaussianNoiseGates UniformBins UniNoise GaussianNoise ModelThrowAwayLogDet get_arg_names InvertibleSequential unsqueeze2d SubsampleSplitter squeeze2d ChansFraction ChunkChansIn2 EverySecondChan Flatten2dAndCat Flatten2d ViewAs latent_model convert_class_model_for_multi_scale_nll create_glow_model flow_block Conv2dZeros get_dense_block compute_same_pad Conv2d split_glow_into_pre_dist_and_dist convert_glow_to_pre_dist_model get_conv_block WrapForPatches create_patch_glow_model unfold_patches_batch fold_to_images_batch preprocess compute_auc_for_s_scores 
create_png_bpds compute_glow_bpds read seek floor clamp Compose get_one_hot_encode extend Path CIFAR10 CIFAR100 Compose get_one_hot_encode extend Compose get_one_hot_encode extend Path SVHN arange DataLoader np_to_var round sorted ones transpose Resize append sum one_hot concatenate glob Compose choice join int extend TensorDatasetWithTransforms tiny_data array len get_FASHION_MNIST get_MNIST get_CIFAR10 get_SVHN get_CIFAR100 MNIST Compose get_one_hot_encode extend Resize RandomAffine Path append FashionMNIST Compose get_one_hot_encode extend Resize RandomAffine Path append check_dataset DataLoader join Compose brats_data BRATS2018 DataLoader get_celeb_a DataLoader celeba_data int list Compose Subset CelebA range DataLoader get_tiny_imagenet tiny_imagenet_data TinyImageNet Compose list keys concatenate list keys concatenate load_tiny_imagenet LSUN dict DataLoader load_celeb_a lsun_data load_train_test dict load_train_test copy concatenate print _evaluate_without_noise set_non_finite_to compute_auc_for_scores items list set_random_seeds get_nlls print get_nodes_by_names get_grey_loaders get_nlls_only_final convert_class_model_for_multi_scale_nll Expression get_rgb_loaders PreprocessedLoader OrderedDict func norm view ones var_to_np nan sum rand_like norm view ones var_to_np unsqueeze cuda nan sum rand_like roc_auc_score concatenate concatenate average_precision_score collect_for_dataloader concatenate T list get_nodes_by_names dict IntermediateResultsNode collect_for_dataloader zip dict dict enumerate tqdm apply_inlier_loss_from_outs get_nlls nll_class_loss backward squeeze type_as mean unsqueeze warning apply_outlier_losses_from_outs get_nlls nll_class_loss detach relu backward min mean len latent_model LambdaLR STARTED get_nodes_by_names MergeLogDets InvertibleSequential cuda run list set_random_seeds BaseFineIndependent load_state_dict InvertibleClassConditional create_patch_glow_model var_to_np module range compute_bpds SummaryWriter ScheduledOptimizer init_all_modules add_event_handler Adamax info CatChansNode item EPOCH_COMPLETED create_glow_model load items deepcopy print Engine dict load_data ModelCheckpoint LSUN dict DataLoader lsun_data preproced load_train_test zeros_like binary_cross_entropy_with_logits step zero_grad param_groups param_groups warning nanmax deepcopy max size squeeze len any cuda exp squeeze sum max seed manual_seed_all manual_seed pin_memory tensor asarray astype hasattr copy isnan interp array flatnonzero log zeros_like hasattr rand_like modules append cuda net cat enumerate Variable normal_ fmod unsqueeze is_cuda exp pi sqrt unsqueeze sum log detach exp pi sqrt unsqueeze sum log detach append extend prev append get_all_nodes dict get_all_nodes hasattr modules parameters size view contiguous size view contiguous Flatten2d ActNorm CatAsListNode Unlabeled NClassIndependentDist ChunkByIndices Node append SelectNode InvertibleSequential range CatAsListNode prev warning module_list append InvertibleSequential module enumerate get_nodes_by_names CatChansNode ChunkByIndices Node get_nodes_by_names CatChansNode Flatten2d ActNorm CatAsListNode Unlabeled NClassIndependentDist ChunkChans Node append SelectNode SubsampleSplitter InvertibleSequential range Sequential Conv2dZeros Conv2d Sequential Linear AffineCoefs block_fn ActNorm AdditiveCoefs CouplingLayer EverySecondChan AffineModifier Shuffle Identity InvPermute InvertibleSequential ChunkChansIn2 isinstance unfold reshape reshape fold permute Flatten2d ActNorm Unlabeled NClassIndependentDist WrapForPatches 
SubsampleSplitter InvertibleSequential cuda len tqdm shape append prod enumerate permute append concatenate numpy tqdm roc_auc_score concatenate | # hierarchical_anomaly_detection Pytorch implementation of the NeurIPS 2020 paper [Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features](https://proceedings.neurips.cc/paper/2020/hash/f106b7f99d2cb30c3db1c3cc0fde9ccb-Abstract.html). The code allows the users to reproduce and extend the results reported in the study. Please cite the above paper when reporting, reproducing or extending the results. ## Purpose of the project This software is a research prototype, solely developed for and published as part of the publication. It will neither be maintained nor monitored in any way. ## Requirements This is a Python3 codebase. You will need some libraries: - Pytorch - ignite - tensorboardX | 1,586 |
boukhayma/3dhand | ['pose prediction'] | ['3D Hand Shape and Pose from Images in the Wild'] | tester.py segment.py datasets.py create_colored_meshes.py heat_map.py transform.py crop.py create_synthetic_data.py model.py HandTestSet get_poseweights DeconvBottleneck Bottleneck rot_pose_beta_to_mesh resnet34_Mano conv3x3 rodrigues BasicBlock ResNet_Mano inside_polygon Scale S view Variable size matmul pow sqrt numpy argwhere sum cuda Variable cuda view rodrigues get_poseweights view insert size squeeze len matmul unsqueeze repeat permute with_zeros rodrigues append range cat split ResNet_Mano Linear range len | # 3D Hand Shape and Pose from Images in the Wild Adnane Boukhayma, Rodrigo de Bem, Philip H.S. Torr. [CVPR 2019 (Oral)](https://arxiv.org/abs/1902.03451) <img src="pipeline.png" height="200"/> ## PCK curves We provide scripts and data to plot 3D & 2D PCK curves in figures 4,5,6,7,8 of the paper in directory `PCK`. We use [gnuplot](http://www.gnuplot.info/) for plotting the figures. For example: ``` cd PCK/dataset ./figx ``` | 1,587 |
bowang-lab/shape-attentive-unet | ['medical image segmentation', 'semantic segmentation'] | ['SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation'] | radam.py data/u.py models/mynn.py lib/utils/data/distributed.py lib/nn/modules/tests/test_numeric_batchnorm.py lib/nn/modules/tests/test_sync_batchnorm.py models/GSConv.py lib/utils/data/sampler.py loss.py lib/utils/th.py lib/nn/__init__.py train.py lib/utils/data/dataloader.py misc_functions.py models/custom_functions.py lib/utils/data/__init__.py lib/utils/__init__.py test_and_pack.py models/__init__.py lib/nn/modules/batchnorm.py lib/nn/modules/unittest.py utils.py lib/utils/data/dataset.py models/models.py data/augmentations.py lib/nn/parallel/__init__.py lib/nn/modules/replicate.py vanilla_backprop.py guided_backprop.py lib/nn/modules/__init__.py lib/nn/parallel/data_parallel.py models/attention_blocks.py smoothgrad.py AttrDict.py models/adaptive_avgmax_pool.py models/norm.py config.py data/ac17_dataloader.py lib/nn/modules/comm.py data/crop_and_pad_augmentations.py models/resnet.py data/test_loader.py AttrDict assert_and_infer_cfg GuidedBackprop dice_loss ImageBasedCrossEntropyLoss2d CrossEntropyLoss2d LabelSmoothSoftmaxCE DualLoss save_class_activation_images recreate_image save_gradient_images get_positive_negative_saliency convert_to_grayscale get_example_params format_np_output preprocess_image save_image apply_colormap_on_image AdamW RAdam PlainRAdam generate_smooth_grad undo_crop visualize_result evaluate save_as_nifti round_num main resample_to_orig eval adjust_learning_rate checkpoint main train group_weight create_optimizers process_range colorEncode AverageMeter accuracy intersectionAndUnion parse_devices unique NotSupportedCliException find_recursive VanillaBackprop AC17_2DLoad AC17Data augment_gamma PaddingCenterCropTest CenterCrop RandomCrop RandomSized AdjustGamma RandomErasing AdjustHue RandomRotate PaddingCenterCrop RandomVerticallyFlip AdjustBrightness AdjustContrast RandomHorizontallyFlip AdjustSaturation FreeScale Compose RandomTranslate ComposeTest RandomSizedCrop Scale random_crop get_lbs_for_random_crop center_crop get_lbs_for_center_crop pad_nd_image_and_seg crop AC17Test rotate_coords_3d elastic_deform_coordinates_2 pad_nd_image scale_coords center_crop_2D_image mask_random_squares resize_image_by_padding_batched random_crop_2D_image create_random_rotation create_matrix_rotation_z_3d center_crop_2D_image_batched find_entries_in_array illumination_jitter convert_seg_image_to_one_hot_encoding_batched random_crop_3D_image convert_seg_image_to_one_hot_encoding generate_elastic_transform_coordinates resize_image_by_padding transpose_channels rotate_coords_2d uniform mask_random_square create_matrix_rotation_2d generate_noise create_matrix_rotation_x_3d convert_seg_to_bounding_box_coordinates random_crop_3D_image_batched center_crop_3D_image_batched interpolate_img get_range_val general_cc_var_num_channels random_crop_2D_image_batched elastic_deform_coordinates create_matrix_rotation_y_3d uncenter_coords center_crop_3D_image resize_multichannel_image create_zero_centered_coordinate_mesh resize_segmentation _sum_ft SynchronizedBatchNorm2d _unsqueeze_ft _SynchronizedBatchNorm SynchronizedBatchNorm1d SynchronizedBatchNorm3d SyncMaster FutureResult SlavePipe execute_replication_callbacks CallbackContext DataParallelWithCallback patch_replication_callback TorchTestCase as_numpy handy_var NumericTestCase SyncTestCase handy_var _find_bn _async_copy_stream UserScatteredDataParallel dict_gather 
_async_copy DictGatherDataParallel async_copy_to _get_stream user_scattered_collate mark_volatile as_variable as_numpy DataLoaderIter _set_SIGCHLD_handler default_collate _worker_manager_loop DataLoader _worker_loop ExceptionWrapper pin_memory_batch random_split ConcatDataset Subset TensorDataset Dataset DistributedSampler SubsetRandomSampler WeightedRandomSampler RandomSampler BatchSampler SequentialSampler Sampler AdaptiveAvgMaxPool2d adaptive_avgmax_pool2d pooling_factor conv1x1 SEResNetBottleneck SEBottleneck SpatialAttentionBlock _MRF convbnrelu Bottleneck conv3x3 batchnorm DualAttBlock SEModule conv2d_same numerical_gradients_2d gradient_central_diff calc_pad_same convTri compute_normal compute_normal_2 compute_grad_mag compute_single_sided_diferences t GatedSpatialConv2d HighFrequencyGatedSpatialConv2d Conv2dPad CenterBlock SegmentationModule ConvRelu SAUNet conv1x1_bn_relu SegmentationModuleBase conv3x3 conv3x3_bn_relu ModelBuilder SkipConv DecoderBlock initialize_weights Norm2d initialize_weights Norm2d ResNet resnet50 Bottleneck load_url conv3x3 resnet18 BasicBlock resnet101 batch_weighting syncbn print immutable BatchNorm2d sum tuple sigmoid mean unsqueeze softmax float type range cat ndimension percentile sum min expand_dims abs clip join save_image min makedirs join save_image apply_colormap_on_image makedirs fromarray uint8 color_map new size astype alpha_composite copy convert get_cmap uint8 transpose astype repeat expand_dims fromarray save isinstance format_np_output unsqueeze_ Variable transpose thumbnail float32 float enumerate transpose round range copy max maximum preprocess_image convert alexnet generate_gradients Variable normal_ item zeros range fromarray int uint8 size crop astype round_num max zeros_like shape resize range undo_crop join uint8 imwrite min astype result max join str print Nifti1Image eye to_filename update visualize zeros_like visualize_result synchronize AverageMeter perf_counter tqdm eval save_as_nifti save_test_path cpu cuda range resample_to_orig AC17 SegmentationModule evaluate ComposeTest print set_device DataLoader ModelBuilder cuda build_unet DualLoss gpu update format num_class as_numpy print synchronize AverageMeter intersectionAndUnion average sum cuda enumerate zero_grad num_epoch lr_encoder adjust_learning_rate cuda running_lr_decoder running_lr_encoder lr_pow append update format param_groups mean item float epoch_iters time backward print max_iters AverageMeter average step segmentation_module format ckpt print save state_dict _ConvNd isinstance bias _BatchNorm modules append weight Linear RAdam Adam group_weight SGD running_lr_encoder param_groups cos lr_encoder num_epoch patch_replication_callback num_epoch UserScatteredDataParallel load2D range Compose start_epoch eval checkpoint create_optimizers named_parameters train append join filter walk concatenate cumsum sort flatten argsort shape nonzero empty zeros astype unique float sum histogram copy list map strip groups match func append split std min mean uniform power float max range append randint range len append range len dtype list get_lbs_for_random_crop tuple shape pad any get_lbs_for_center_crop zeros range len pad_nd_image tuple random append meshgrid range gaussian_filter len tuple astype range len list shape unique zeros enumerate list unique zeros range enumerate random append array range gaussian_filter len random array append abs max range gaussian_filter len create_matrix_rotation_z_3d reshape identity create_matrix_rotation_y_3d shape create_matrix_rotation_x_3d len 
reshape create_matrix_rotation_2d shape deepcopy range dtype astype map_coordinates unique zeros enumerate random gaussian_filter arange array zeros max len shape array len shape array len shape array len shape array len shape randint len shape randint len shape randint len shape randint len list ones tuple reshape shape array max list ones tuple reshape array max array array array array normal dot shape array range max deepcopy sum grey_dilation tuple min power sqrt any gaussian_gradient_magnitude append zeros abs array range gaussian_filter len pop int max lb astype extend copy argwhere append array range enumerate dtype astype unique resize zeros enumerate dtype list astype resize zeros range isinstance orig_type uniform normalvariate type list shape pad array range len randint get_range_val range copy range mask_random_square list hasattr __data_parallel_replicate__ modules enumerate len replicate data isinstance size sum modules isinstance is_tensor record_stream isinstance Sequence Mapping cuda zip len zip len device_count Stream Mapping Sequence isinstance is_tensor Mapping Sequence is_tensor isinstance Sequence Variable Mapping seed init_fn get set_num_threads _set_worker_signal_handlers collate_fn manual_seed get isinstance set_device put pin_memory_batch sum list isinstance Sequence new Mapping type zip _new_shared is_tensor Mapping is_tensor isinstance Sequence SIGCHLD signal getsignal randperm sum print max_pool2d avg_pool2d cat shape pad conv2d calc_pad_same conv2d_same shape repeat Tensor cuda clone shape gradient_central_diff list reversed shape pad conv2d repeat Tensor cuda range cat remainder print numerical_gradients_2d set_trace pi sign convTri atan remainder print numerical_gradients_2d set_trace pi sign convTri atan mul numerical_gradients_2d convTri sqrt max show normal print imshow float GatedSpatialConv2d gconv MODEL getattr layer fill_ isinstance modules zero_ BatchNorm2d weight kaiming_normal load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict join format urlretrieve write makedirs | Code for our paper "SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation". https://arxiv.org/pdf/2001.07645.pdf
(January 21, 2020) Notice: we are currently working on cleaning up the code and fixing instabilities. We will provide further documentation and updates here in the near future.
If you find our work helpful, please consider citing our work:
```
@misc{sun2020saunet,
title={SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation},
author={Jesse Sun and Fatemeh Darbeha and Mark Zaidi and Bo Wang},
| 1,588 |
bowenc0221/panoptic-deeplab | ['panoptic segmentation', 'instance segmentation', 'semantic segmentation'] | ['Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation', 'Panoptic-DeepLab'] | segmentation/model/meta_arch/base.py segmentation/data/transforms/__init__.py segmentation/data/transforms/build.py segmentation/data/transforms/transforms.py segmentation/data/datasets/cityscapes.py segmentation/config/default.py segmentation/utils/comm.py segmentation/data/samplers/__init__.py segmentation/model/meta_arch/deeplabv3.py segmentation/data/datasets/cityscapes_panoptic.py segmentation/evaluation/__init__.py segmentation/data/transforms/target_transforms.py segmentation/utils/flow_vis.py tools_d2/d2/backbone.py segmentation/data/datasets/utils.py segmentation/utils/__init__.py segmentation/utils/save_annotation.py segmentation/evaluation/coco_instance.py segmentation/model/loss/__init__.py segmentation/evaluation/coco_panoptic.py tools_d2/train_deeplab.py tools/demo.py tools_d2/convert-pretrain-model-to-d2.py segmentation/model/decoder/conv_module.py segmentation/data/datasets/base_dataset.py tools_d2/d2/predictor.py tools_d2/_init_paths.py segmentation/solver/utils.py segmentation/evaluation/semantic.py segmentation/model/post_processing/instance_post_processing.py segmentation/evaluation/instance.py segmentation/model/decoder/deeplabv3.py segmentation/solver/__init__.py segmentation/utils/utils.py tools_d2/train_panoptic_deeplab.py datasets/prepare_coco_panoptic_trainid.py segmentation/model/meta_arch/panoptic_deeplab.py segmentation/model/post_processing/evaluation_format.py segmentation/solver/build.py segmentation/model/backbone/xception.py segmentation/utils/debug.py segmentation/evaluation/panoptic.py tools/train_net.py segmentation/model/backbone/__init__.py segmentation/model/meta_arch/__init__.py segmentation/model/__init__.py segmentation/model/meta_arch/deeplabv3plus.py segmentation/config/hrnet_config.py segmentation/data/datasets/__init__.py segmentation/utils/logger.py segmentation/model/post_processing/__init__.py tools/_init_paths.py tools/test_net_single_core.py segmentation/data/__init__.py segmentation/model/decoder/aspp.py segmentation/config/__init__.py segmentation/data/samplers/distributed_sampler.py segmentation/model/backbone/hrnet.py segmentation/model/backbone/mobilenet.py segmentation/data/transforms/pre_augmentation_transforms.py segmentation/model/post_processing/semantic_post_processing.py segmentation/solver/lr_scheduler.py segmentation/utils/test_utils.py tools_d2/d2/__init__.py tools_d2/demo.py segmentation/model/build.py segmentation/model/loss/criterion.py segmentation/model/decoder/__init__.py segmentation/utils/env.py segmentation/data/build.py segmentation/model/backbone/resnet.py segmentation/model/decoder/panoptic_deeplab.py segmentation/model/decoder/deeplabv3plus.py segmentation/model/backbone/mnasnet.py segmentation/data/datasets/coco_panoptic.py convert_to_trainid update_config build_dataset_from_cfg worker_init_reset_seed build_test_loader_from_cfg build_train_loader_from_cfg BaseDataset Cityscapes CityscapesPanoptic COCOPanoptic InferenceSampler TrainingSampler build_transforms Resize SemanticTargetGenerator PanopticTargetGenerator ToTensor Compose RandomCrop Normalize RandomHorizontalFlip RandomScale COCOInstanceEvaluator COCOPanopticEvaluator _print_panoptic_results CityscapesInstanceEvaluator CityscapesPanopticEvaluator SemanticEvaluator build_loss_from_cfg build_segmentation_model_from_cfg hrnet48 hrnet18 
hrnet32 conv1x1 _hrnet Bottleneck HighResolutionModule HighResolutionNet conv3x3 BasicBlock _load_pretrained _InvertedResidual mnasnet0_5 _get_depths mnasnet1_0 _round_to_multiple_of mnasnet1_3 mnasnet0_75 MNASNet _stack InvertedResidual _make_divisible ConvBNReLU mobilenet_v2 MobileNetV2 conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock resnet101 XceptionBlock xception65 Xception65 SeparableConv2d ASPPPooling ASPPConv ASPP stacked_conv depthwise_separable_conv basic_conv DeepLabV3Decoder DeepLabV3PlusDecoder SinglePanopticDeepLabDecoder SinglePanopticDeepLabHead PanopticDeepLabDecoder OhemCE DeepLabCE RegularCE BaseSegmentationModel DeepLabV3 DeepLabV3Plus PanopticDeepLab get_cityscapes_instance_format group_pixels get_panoptic_segmentation get_instance_segmentation merge_semantic_and_instance find_instance_center get_semantic_segmentation build_lr_scheduler _generate_optimizer_class_with_gradient_clipping build_optimizer _create_gradient_clipper maybe_add_gradient_clipping GradientClipType WarmupMultiStepLR WarmupCosineLR _get_warmup_factor_at_iter WarmupPolyLR get_lr_group_id get_local_size synchronize get_world_size get_local_rank reduce_dict _get_global_gloo_group shared_random_seed all_gather get_rank gather _serialize_to_tensor is_main_process _pad_to_largest_tensor save_debug_images seed_all_rng make_colorwheel flow_to_color flow_compute_color _cached_log_stream log_first_n _find_caller setup_logger log_every_n create_small_table _ColorfulFormatter log_every_n_seconds random_color save_heatmap_and_center_image save_annotation save_offset_image save_heatmap_image save_panoptic_annotation label_to_color_image save_instance_annotation save_center_image multi_scale_inference flip_tensor upsample_predictions get_loss_info_str get_module AverageMeter to_cuda CityscapesMeta parse_args main read_image main parse_args main parse_args add_path get_parser setup_cfg main build_sem_seg_train_aug setup Trainer main build_sem_seg_train_aug setup Trainer add_path d2_xception_65 d2_hrnet VisualizationDemo AsyncPredictor pop deepcopy format print append merge_from_file merge_from_list cfg defrost opts freeze format TrainingSampler getLogger IMS_PER_BATCH get_world_size build_dataset_from_cfg DataLoader SAMPLER_TRAIN BatchSampler info len build_dataset_from_cfg DataLoader BatchSampler InferenceSampler len randint seed_all_rng max_scale crop_h scale_step_size Compose mean pad_value crop_w std min_scale label_pad_value append tabulate info BatchNorm2d BN_MOMENTUM modules isinstance get int HighResolutionNet load_state_dict info load_state_dict_from_url append _InvertedResidual range int max load_state_dict_from_url load_state_dict _load_pretrained MNASNet _load_pretrained MNASNet _load_pretrained MNASNet _load_pretrained MNASNet int max load_state_dict_from_url load_state_dict MobileNetV2 ResNet load_state_dict load_state_dict_from_url load_state_dict_from_url load_state_dict Xception65 append BatchNorm2d ReLU Conv2d append BatchNorm2d extend ReLU append conv partial range where OrderedDict mean unique append array topk threshold squeeze flatten nonzero max_pool2d norm reshape squeeze transpose unsqueeze repeat cat group_pixels zeros_like find_instance_center view zeros_like size unique mode merge_semantic_and_instance get_instance_segmentation get_semantic_segmentation squeeze clone type __name__ _create_gradient_clipper type CLIP_GRADIENTS _generate_optimizer_class_with_gradient_clipping 
WEIGHT_DECAY_NORM isinstance WEIGHT_DECAY_BIAS Adam SGD add named_parameters BASE_LR modules maybe_add_gradient_clipping BIAS_LR_FACTOR WEIGHT_DECAY LR_SCHEDULER_NAME param_groups max Counter enumerate barrier get_world_size format getLogger from_buffer dumps get_rank warning get_backend device to len get_world_size all_gather tensor max zeros cat _serialize_to_tensor _get_global_gloo_group loads zip append max _pad_to_largest_tensor max zip _get_global_gloo_group loads get_rank append _serialize_to_tensor _pad_to_largest_tensor randint all_gather get_world_size reverse_transform argmax max exists clip fromarray flow_compute_color range detach size astype label_to_color_image tile remove reshape create_label_colormap zeros numpy len seed int set_rng_state format from_bytes get_state getLogger strftime getpid info urandom zeros floor arange uint8 arctan2 astype square pi sqrt floor int32 make_colorwheel zeros range square sqrt max clip setFormatter join format _cached_log_stream getLogger addHandler StreamHandler Formatter mkdirs _ColorfulFormatter dirname colored DEBUG setLevel f_back _getframe f_code _find_caller log isinstance _find_caller log get time _find_caller log tuple tabulate zip randint len fromarray astype label_to_color_image amin amax random_color fromarray astype unique zeros array range len fromarray tuple astype add set unique zeros _random_color fromarray ellipse Draw astype enumerate fromarray reshape astype clip fromarray ellipse Draw reshape astype clip enumerate fromarray flow_compute_color astype OrderedDict list keys interpolate model list MEAN len shape upsample_predictions DILATION to sum CROP_SIZE range Compose astype FLIP_TEST flip_tensor set_image_pooling softmax float keys int int32 zeros transforms SCALE_LIST list keys list to keys range len update_config add_argument ArgumentParser exif_transpose asarray convert expand_dims open getLogger build_test_loader_from_cfg input_files output_dir device dataset OUTPUT_DIR exists list mkdirs load_state_dict append parse_args to extension format glob Compose setup_logger ENABLED eval pformat info load join isinstance MODEL_FILE DETERMINISTIC AverageMeter build_segmentation_model_from_cfg isfile GPUS BENCHMARK warning DEBUG CityscapesInstanceEvaluator COCOPanopticEvaluator CityscapesPanopticEvaluator DILATION sum CROP_SIZE EVAL_PANOPTIC set_image_pooling EVAL_INSTANCE float int EVAL_FOREGROUND COCOInstanceEvaluator SemanticEvaluator to_cuda model MAX_ITER zero_grad WEIGHTS DistributedDataParallel get_lr_group_id save build_lr_scheduler set_device build_optimizer step build_train_loader_from_cfg save_debug_images iter get_rank RESUME next range is_main_process update init_process_group size get_world_size item time pop backward convert_sync_batchnorm local_rank insert merge_from_file confidence_threshold config_file get_cfg merge_from_list add_panoptic_deeplab_config opts freeze add_argument ArgumentParser IGNORE_VALUE RandomFlip TYPE RandomCrop_CategoryAreaConstraint ENABLED append SINGLE_CATEGORY_MAX_AREA SIZE merge_from_file add_deeplab_config config_file get_cfg merge_from_list default_setup opts freeze setup resume_or_load build_model test Trainer eval_only RandomCrop add_panoptic_deeplab_config | # Panoptic-DeepLab (CVPR 2020) Panoptic-DeepLab is a state-of-the-art bottom-up method for panoptic segmentation, where the goal is to assign semantic labels (e.g., person, dog, cat and so on) to every pixel in the input image as well as instance labels (e.g. an id of 1, 2, 3, etc) to pixels belonging to thing classes.  
This is the **PyTorch re-implementation** of our CVPR2020 paper based on Detectron2: [Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation](https://arxiv.org/abs/1911.10194). Segmentation models with DeepLabV3 and DeepLabV3+ are also supported in this repo now! ## News * [2021/01/25] Found a bug in old config files for COCO experiments (need to change `MAX_SIZE_TRAIN` from 640 to 960 for COCO). Now we have also reproduced COCO results (35.5 PQ)! | 1,589 |
boyu-zhang-25/DropConnect_DPP | ['network pruning'] | ['Statistical Mechanical Analysis of Neural Network Pruning'] | dpp_sample_ts.py teacher_dataset.py CIFAR_dpp/CIFAR.py mnist_dpp/dpp_sample.py plot_all_edges.py loss_stat.py calc_acc.py cal_param_ts.py mnist_dpp/cal_param_MNIST.py mnist_dpp/MNIST.py teacher_student.py CIFAR_dpp/cal_param_CIFAR.py evaluate.py plot_loss.py CIFAR_dpp/dpp_sample.py f4 err f2 f1 f3 calculate_param create_weight create_kernel_ts sample_dpp_multiple_ts reweight_edge dpp_sample_edge_ts create_edge_kernel_ts dpp_sample_node_ts reweight_node plot_R get_cube_Q plot_cube_Q plot_Q get_Q main get_R main valid_choice main Teacher_dataset teacher_predict get_masks test student_MLP main train cal_param_CIFAR MLP test prune_MLP main train create_weight create_kernel sample_dpp create_edge_kernel dpp_sample_edge sigmoid reweight_edge dpp_sample_node reweight_node cal_param_MNIST create_weight create_kernel sample_dpp create_edge_kernel dpp_sample_edge sigmoid reweight_edge dpp_sample_node reweight_node MLP test prune_MLP main train print sample_exact_k_dpp FiniteDPP range print shape format print create_weight create_kernel_ts append load sample_dpp_multiple_ts print len append create_edge_kernel_ts range enumerate open create_kernel_ts sample_dpp_multiple_ts ones dot range coef_ copy dot sqrt LinearRegression range coef_ dot zeros LinearRegression enumerate load T print dot open zeros numpy len subplots tight_layout colorbar imshow savefig figure abs load T dot numpy open str subplots tight_layout imshow savefig abs load T print dot open zeros numpy len subplots tight_layout colorbar imshow savefig figure abs int plot_R path_to_student_mask print add_argument plot_Q shape input_dim ArgumentParser path_to_teacher get_Q parse_args range get_R append len subplots teacher_h_size flatten abs str list imshow savefig pruning_choice student_h_size enumerate dot sqrt numpy dump teacher_path Teacher_dataset open requires_grad format view criterion model backward print zero_grad grad sqrt input_dim step range len get_test_example view criterion model eval range data T print ones shuffle range dpp_sample_node_ts sqrt shape input_dim append normalize float numpy zeros enumerate SGD device save seed MSELoss to state_dict epoch eval manual_seed is_available load train print dataset item argmax enumerate format print dataset len sum T print ones abs dpp_sample_edge shuffle range from_numpy shape reweight_edge zeros dpp_sample_node numpy reweight_node enumerate save_model DataLoader DataParallel dataset probability next CrossEntropyLoss format Compose test CIFAR10 reweighting parameters epochs train_batch_size sample_exact_k_dpp FiniteDPP create_kernel print append float range load format sample_dpp print create_edge_kernel len shape split append zeros range enumerate open load create_kernel format sample_dpp print ones dot shape open zeros split copy sigmoid print binomial float MNIST | # Statistical Mechanical Analysis of Neural Network Pruning Code for our paper: >Rupam Acharyya, Ankani Chattoraj\*, **Boyu Zhang**\*, Shouman Das, and Daniel Stefankovic. "Statistical Mechanical Analysis of Neural Network Pruning." In: _37th Conference on Uncertainty in Artificial Intelligence (UAI 2021)_. 2021. You can read the preprint [here](https://proceedings.mlr.press/v161/acharyya21a.html). 
## Citation ``` @inproceedings{acharyya2021statistical, title={Statistical mechanical analysis of neural network pruning}, author={Acharyya, Rupam and Chattoraj, Ankani and Zhang, Boyu and Das, Shouman and {\v{S}}tefankovi{\v{c}}, Daniel}, | 1,590 |
bplank/bleaching-text | ['gender prediction'] | ['Bleaching Text: Abstract Features for Cross-lingual Gender Prediction'] | scripts/5.feats.find.py scripts/9.smallTable.py scripts/4.embeds.prep.py scripts/6.humans.featurize.py scripts/1.ngramTuning.graph.py scripts/9.bigTable.py scripts/0.genFeatures.py scripts/bleaching.py scripts/0.featurize.py scripts/6.humans.run.py scripts/0.concatJson.py src/classifier.py scripts/1.ngramTuning.run.py scripts/3.4-1.run.py scripts/2.lexVScomb.run.py scripts/6.humans.to20.py scripts/3.4-1.prep.py scripts/0.combineJson.py scripts/9.fleiss.py scripts/4.embeds.run.py scripts/9.humanTable.py scripts/0.csv2json.py src/myutils.py scripts/6.humans.filter.py shape punctCons punctAgr getScore fleiss_kappa getHumanScore getScore getScore initFreq length punctCons vowels bleachAll shape punctAgr lex bleachText frequency analyze train_eval get_majority_baseline vectorize_data load_embeddings main extract_feats Featurizer get_size_tuple EmbedsFeaturizer islower isdigit isupper range len startswith open shape float sum load len range open update str int list items Counter log isalpha print split StratifiedKFold ArgumentParser get_majority_baseline open analyze_features list array append parse_args dump format test mean pred load analyze items train_eval print add_argument vectorize_data std split format print classification_report LinearSVC classes_ accuracy_score predict fit print format load data EmbedsFeaturizer print Featurizer embeds show_instance test load_embeddings TfidfTransformer tf_idf transform fit_transform array DictVectorizer open analyze_features data str basename describe print fit LinearSVC write vectorize_data len dict classes_ dirname zip open extract_feats makedirs get join format replace print n_gram get_size_tuple c_n_gram lower split zip range rm_uu ngrams format print split open len int split | # bleaching-text Required python packages: ``` emoji (we used version 0.4.5) sklearn (we used version 0.19.1) numpy (we used version 1.14.2) ``` To reproduce results from the paper (note that they are also already included in the runs folder): ``` ./scripts/runAll.sh | 1,591 |
bpucla/latent-space-EBM-prior | ['text generation', 'anomaly detection'] | ['Learning Latent Space Energy-Based Prior Model'] | train_celeba_128.py train_svhn.py pygrid.py fid_v2_tf_cpu.py fid_score is_local InvalidFIDException calculate_activation_statistics get_activations check_or_download_inception create_inception_graph _get_inception_layer calculate_frechet_distance free_device is_array update_job_status write_opts read_opts fill_queue init_mp cast_str is_bool run_job reset_job_status is_float run_jobs allocate_device setup_logging_file copy_source overwrite_opt FileHandler setup_logging is_int get_exp_id update_job_result_file get_output_dir merge_dicts compute_fid compute_fid_nchw set_gpu get_grad_norm Swish print_gpus set_seed _netE update_job_result get_dataset create_args_grid _netWrapper parse_args get_free_gpu AttrDict set_cuda plot_stats is_xsede weights_init_xavier to_named_dict get_activation copy_source GELU main setup_logging makedirs_exp get_exp_id Mish _netG train get_output_dir merge_dicts compute_fid compute_fid_nchw set_gpu get_grad_norm Swish print_gpus set_seed _netE update_job_result get_dataset create_args_grid parse_args get_free_gpu AttrDict set_cuda plot_stats is_xsede weights_init_xavier to_named_dict get_activation copy_source GELU main setup_logging makedirs_exp get_exp_id Mish _netG train get_output_dir get_shape TensorShape get_operations outputs append get_tensor_by_name enumerate print reshape _get_inception_layer empty range run atleast_2d print iscomplexobj atleast_1d dot sqrtm trace eye real abs max imag mean cov get_activations print is_local get_file set_start_method join basename copyfile stdout setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO FileHandler stdout join setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO FileHandler join strftime makedirs acquire acquire next acquire write_opts read_opts update_job_result acquire next write_opts read_opts join str format Process makedirs dict Manager start allocate_device acquire info merge len format info read_opts int float is_float is_bool is_array is_int items list setattr writer writerow f reader f cast_str next enumerate set_device format add_argument ArgumentParser product enumerate format CelebA SVHN SingleImagesFolderMTDataset dataset fill_ normal_ xavier_normal_ weight __name__ get_fid zero_grad DataLoader DataParallel save device seed set_seed Adam get_dataset MSELoss gpu_multi load_state_dict append parse_args to module set_cuda range detach plot_stats format inf e_gamma sample_p_0 to_named_dict mse mean train_flag info item n_epochs g_gamma overwrite_opt net setup_logging load int enumerate makedirs_exp ExponentialLR load_ckpt backward Variable parameters netG step len fid_score to_nhwc compute_fid items join subplots plot close set_ylabel savefig keys enumerate len norm print readlines system print max argmax system seed str manual_seed_all manual_seed is_available is_available AttrDict keys values zip update makedirs print_gpus get_exp_id copy_source train get_output_dir e_max_norm clip_grad_norm_ set_gpu get_grad_norm e_is_grad_clamp _netE apply shape g_max_norm sample_langevin_prior_z clip_grad_norm g_is_grad_clamp sample_langevin_post_z _netG numpy array | # Learning Latent Space Energy-Based Prior Model Code for reproducing reported results in Learning Latent Space Energy-Based Prior Model (https://arxiv.org/pdf/2006.08205.pdf) ## Requirements: ``` tensorflow-gpu=1.14.0 torchvision=0.5.0 pytorch=1.4.0 scipy=1.1.0 scikit-learn=0.21.2 
Pillow=6.2.0 | 1,592 |
braceal/DeepDriveMD | ['protein folding'] | ['DeepDriveMD: Deep-Learning Driven Adaptive Molecular Simulations for Protein Folding'] | deepdrive/preproc/__init__.py deepdrive/__init__.py deepdrive/taskmanager.py examples/cvae_dbscan/scripts/contact_map.py examples/cvae_dbscan/taskmanagers/ml.py deepdrive/md/__init__.py examples/cvae_dbscan/scripts/cvae.py examples/cvae_dbscan/scripts/md.py deepdrive/utils/utils.py deepdrive/preproc/preproc.py examples/cvae_dbscan/taskmanagers/md.py examples/cvae_dbscan/taskmanagers/outlier.py deepdrive/md/md.py deepdrive/deepdrive.py deepdrive/utils/validators.py examples/cvae_dbscan/taskmanagers/preprocess.py deepdrive/utils/__init__.py setup.py examples/cvae_dbscan/scripts/dbscan.py tests/test_get_id.py examples/cvae_dbscan/cvae_dbscan_pipeline.py DeepDriveMD TaskManager openmm_simulate_amber_fs_pep cm_to_cvae get_id validate_between_zero_and_one validate_positive main main main main perform_clustering write_rewarded_pdbs generate_embeddings MDTaskManager CVAETaskManager OPTICSTaskManager ContactMatrixTaskManager TestGetId Simulation CheckpointReporter setVelocitiesToTemperature ContactMapReporter kelvin picoseconds get_coordinates Platform_getPlatformByName DCDReporter StateDataReporter LangevinIntegrator createSystem append loadCheckpoint minimizeEnergy setPositions choice ForceField picosecond femtoseconds int topology setConstraintTolerance randint load_file step list T pad_f reshape hstack map shape pad append array join getcwd DeepDriveMD copyfile run range makedirs glob sorted str EncoderHyperparams int train shuffle RMSprop DecoderConvolution2D VAE LossHistory save_weights EmbeddingCallback save EncoderConvolution2D DecoderHyperparams get_final_conv_params len load get dbscan_clustering join sorted glob Universe len write_rewarded_pdbs optics_clustering min get_id generate_embeddings | # DeepDriveMD Deep-Learning Driven Adaptive Molecular Simulations for Protein Folding --- # Important!! This repository is out of date. Please see our latest version [here](https://github.com/DeepDriveMD/DeepDriveMD-pipeline) or refer to our [website](https://deepdrivemd.github.io/). Thank you! --- Implementation of: https://arxiv.org/pdf/1909.07817.pdf Project location: https://github.com/braceal/DeepDriveMD.git # Instructions Initial setup on Summit https://www.olcf.ornl.gov/summit/ | 1,593 |
brade31919/radar_depth | ['depth estimation'] | ['Depth Estimation from Monocular Images and Sparse Radar Data'] | config/config_nuscenes.py dataset/transforms.py model/multistage_model.py dataset/nuscenes_export.py evaluation/metrics.py model/models.py dataset/nuscenes_dataset_torch_new.py dataset/nuscenes_dataset.py dataset/radar_preprocessing.py evaluation/criteria.py misc/devkit/python/read_depth.py utils.py evaluation/criteria_new.py main.py dataset/dense_to_sparse.py model/utils.py record_scalar_summary validate create_model record_test_scalar_summary customized_collate create_data_loaders main train display_results merge_into_row_with_gt colored_depthmap parse_command adjust_learning_rate_new adjust_learning_rate save_checkpoint get_output_directory save_image add_row merge_into_row config_nuscenes DenseToSparse UniformSampling LidarRadarSampling Nuscenes_dataset nuscenes_dataset_torch create_h5_file export_dataset parse_h5_file select_topk_depth check_valid_depth sid_dist_thresh filter_radar_points_classify plot_valid_labels plot_radar_depth sid_depth_thresh plot_lidar_depth filter_radar_points_gt filter_radar_points_analysis Rotate CenterCrop normalization_imagenet ToTensor denormalization_imagenet HorizontalFlip NormalizeTensor Resize NormalizeNumpyArray _is_tensor_image adjust_gamma adjust_saturation _is_numpy_image Crop denormalization_batch Lambda Compose Colorjitter2 adjust_hue adjust_brightness _is_pil_image adjust_contrast ColorJitter MaskedL1Loss MaskedBerHuLoss MaskedMSELoss SmoothnessLoss MaskedBerHuLoss MaskedMSELoss MaskedL1Loss MaskedCrossEntropyLoss Result AverageMeter_multidist AverageMeter log10 Result_multidist depth_read Unpool weights_init_kaiming_leaky ResNet_latefusion conv_bn_relu ResNet2 ResNet_multifusion ResNet DeConv Decoder UpConv choose_decoder ResNet_pnp weights_init UpProj BasicBlock weights_init_kaiming ResNet_latefusion2 ResNet_multistage Filter_layer Result AverageMeter default_collate print nuscenes_dataset_torch DataLoader validation Parameter ResNet_latefusion ResNet_multistage ResNet2 print ResNet register_parameter tensor modality len validate SGD adjust_learning_rate save_checkpoint abspath save_image cuda str load_state_dict dirname range SummaryWriter format create_model param_groups resume lr float enumerate validation load join evaluate add_scalar print parameters create_data_loaders get_output_directory train epochs makedirs data model zero_grad numpy exp len gpu_time update Result format synchronize size item record_scalar_summary add_image enumerate time make_grid evaluate backward print AverageMeter average cpu step data_time add_scalar data save_image merge_into_row str list record_test_scalar_summary create_group update Result format add_row synchronize size close eval keys enumerate join time items merge_into_row_with_gt isinstance evaluate print AverageMeter File average create_dataset numpy len delta1 delta2 mae average rmse delta3 absrel add_scalar lg10 delta1 delta2 rmse delta3 absrel add_scalar average print print add_argument ArgumentParser names parse_args set_defaults copyfile join save str param_groups param_groups data join format decoder criterion batch_size num_samples sparsifier pretrained lr arch modality min max colored_depthmap transpose squeeze min hstack numpy max colored_depthmap transpose squeeze min hstack numpy max fromarray astype save join AttrDict int16 astype join list create_h5_file len export_name EXPORT_ROOT get_data tqdm range makedirs exp log exp log append range int16 astype sid_depth_thresh zeros 
sum range select_topk_depth check_valid_depth sid_dist_thresh transpose squeeze sqrt expand_dims sum select_topk_depth check_valid_depth sid_dist_thresh transpose squeeze sqrt expand_dims sum select_topk_depth check_valid_depth sid_dist_thresh transpose squeeze sqrt expand_dims sum show colorbar imshow scatter figure gca show colorbar imshow scatter figure gca show colorbar imshow scatter figure gca Brightness enhance enhance Contrast Color enhance fromarray convert mode array split uint8 convert array clip mode Normalize Normalize append unsqueeze denormalization_imagenet range astype array float32 open fill_ isinstance out_channels in_channels Conv2d normal_ sqrt zero_ BatchNorm2d ConvTranspose2d isinstance Conv2d bias weight kaiming_normal_ zero_ ConvTranspose2d constant_ isinstance Conv2d bias weight kaiming_normal_ zero_ ConvTranspose2d constant_ int Sequential Conv2d modules weights_init append BatchNorm2d LeakyReLU | # Depth Estimation from Monocular Images and Sparse Radar Data This is the official implementation of the paper [Depth Estimation from Monocular Images and Sparse Radar Data](https://arxiv.org/abs/2010.00058). In this repo, we provide code for dataset preprocessing, training, and evaluation. Some parts of the implementation are adapted from [sparse-to-dense](https://github.com/fangchangma/sparse-to-dense.pytorch). We thank the authors for sharing their implementation. ## Updates - [x] Training and evaluation code. - [x] Trained models. - [x] Download instructions for the processed dataset. - [ ] Detailed documentation for the processed dataset. - [ ] Code and instructions to process data from the official nuScenes dataset. ## Installation | 1,594 |
brain-research/ncp | ['active learning'] | ['Noise Contrastive Priors for Functional Uncertainty'] | ncp/models/__init__.py ncp/scripts/toy_active.py ncp/datasets/toy.py ncp/models/bbb_ncp.py ncp/tools/__init__.py ncp/tools/attrdict.py ncp/tools/training.py ncp/models/det.py ncp/models/bbb.py ncp/models/det_mix_ncp.py ncp/scripts/flights_active.py ncp/tools/plotting.py ncp/datasets/__init__.py ncp/datasets/numpy_dataset.py load_numpy_dataset generate_vargrad_dataset network define_graph network define_graph network define_graph network define_graph default_schedule main plot_results default_config default_schedule main plot_results default_config AttrDict plot_likelihood visualize_model plot_prediction plot_std_area plot_distance load_results select_next_target run_experiment evaluate_model expanduser AttrDict RandomState normal RandomState astype maximum repeat linspace AttrDict leaky_relu softplus zeros_like Normal weight_std kl_divergence get_variable Independent Deterministic matmul random_normal_initializer astype mean sqrt constant_initializer dense REGULARIZATION_LOSSES variance float32 add_to_collection layer_sizes to_float make_template list learning_rate clip_gradient clip_by_global_norm float32 placeholder mean AdamOptimizer network_tpl apply_gradients int32 zip stddev sum sg noise_std Normal ood_std_prior stop_gradient kl_divergence random_normal center_at_target shape append ncp_scale sigmoid list AttrDict range AttrDict plot_likelihood join subplots set_title set_xlabel set_ylim tight_layout savefig legend logdir plot_distance run_experiment plot_results set_random_seed test_amount reset_default_graph dataset exists list logdir range define_graph format filterwarnings product seeds resume MakeDirs load_numpy_dataset replot train_amount join print set_major_locator MaxNLocator generate_vargrad_dataset __setitem__ __getitem__ subplots set_ticks max log run set_title make_axes_locatable append_axes ones domain scatter savefig plot_std_area append range format concatenate set_xlim close tight_layout targets enumerate min set_ylabel plot_prediction array set_ylim len set_major_locator MaxNLocator plot tick_right set_xlim min argsort fill_between max set_label_coords get set_label_coords set_major_locator MaxNLocator tick_right set_xlim min argsort fill_between max set_ylim items list partial arange plot text set_xlim mean axhline set_major_formatter startswith fill_between std FuncFormatter enumerate items list partial arange plot text set_xlim mean axhline set_major_formatter startswith fill_between std FuncFormatter enumerate join RandomState tolist MakeDirs expanduser ConfigProto AttrDict savez_compressed len mean sqrt logpdf append range log run exp concatenate inputs log mean run append range len load items list defaultdict Glob array append keys | # Noise Contrastive Priors This project contains the source code for Noise Contrastive Priors (NCP) by Danijar Hafner, Dustin Tran, Alex Irpan, Timothy Lillicrap, James Davidson. ## Running the code Install dependencies: ```sh pip3 install numpy tensorflow tensorflow_probability matplotlib ruamel.yaml ``` Active learning on the toy data set: ```sh | 1,595 |
braincorp/ASC | ['visual tracking'] | ['Fundamental principles of cortical computation: unsupervised learning with prediction, compression and feedback'] | ASC/image_sparser.py ASC/frame_reader.py setup.py ASC/__init__.py ASC/STA_STC.py ASC/sparse_manager.py ASC/train_sparse_coding.py FrameReader ImageSparser SparseManager run show str FrameReader read SparseManager feed argsort create_all save K range | # Adaptive Sparse Coding Cortical Model ## Introduction This repository contains the core code used for the model published in the paper "Fundamental principles of cortical computation: unsupervised learning with prediction, compression and feedback". ## Starting up Make sure that virtualenv is installed on your system first. Then clone the repo and run the following commands: ``` git clone [email protected]:braincorp/ASC cd ASC sudo ./install_ubuntu_dependencies.sh | 1,596 |
braincorp/PVM | ['visual tracking', 'common sense reasoning'] | ['Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network'] | PVM_models/PVM_unit_v1.py PVM_tools/abstract_tracker.py PVM_tests/test_SyncUtils.py PVM_framework/PVM_display_helper.py PVM_models/demo00_unit.py other_trackers/center_vision_tracker.py PVM_tests/test_labeled_movie.py PVM_tools/labeled_movie.py tracker_tools/movie_to_PVM_pickle.py PVM_tests/test_MLP.py other_trackers/cmt_vision_tracker.py other_trackers/test_tld_basic.py PVM_framework/PVM_SignalProvider.py other_trackers/null_vision_tracker.py tracker_tools/export_to_zip.py PVM_models/run_tracking_benchmark.py PVM_tests/test_SharedArray.py docs/conf.py PVM_models/demo02_unit.py PVM_framework/PVM_Create.py PVM_tests/test_bounding_region.py PVM_framework/MLP.py other_trackers/bounding_boxer.py PVM_models/PVM_unit_test.py PVM_framework/PVM_datasets.py PVM_models/__init__.py PVM_tools/__init__.py PVM_models/demo03_unit.py other_trackers/color_vision_tracker.py PVM_models/demo01_run.py PVM_framework/AbstractExecutionUnit.py other_trackers/tld_vision_tracker.py PVM_framework/SyncUtils_python.py tracker_tools/raw_to_PVM_pickle.py PVM_framework/SharedArray.py PVM_tools/abstract_bounding_boxer.py tracker_tools/scale_PVM_pickle.py other_trackers/backprojection.py PVM_models/PVM_tracker.py tracker_tools/label_PVM_pickle.py tracker_tools/play_PVM_pickle.py PVM_framework/debug_logger.py other_trackers/test_struck_bindings.py PVM_framework/CoreUtils.py PVM_models/demo01_unit.py PVM_models/demo00_run.py other_trackers/struck_tracker.py PVM_models/demo02_run.py PVM_models/process_dream_data.py PVM_models/PVM_plot_error.py PVM_framework/__init__.py PVM_framework/AbstractExecutionManager.py other_trackers/__init__.py PVM_models/PVM_run.py PVM_models/demo03_run.py PVM_tests/test_create.py tracker_tools/images_to_PVM_pickle.py PVM_framework/PVM_upgrade_dict.py tracker_tools/upgrade_cloud_lib.py PVM_tools/benchmark.py PVM_tools/bounding_region.py other_trackers/setup_opentld.py PVM_models/PVM_unit_2step_residual_v1.py PVM_framework/PVM_debug_console.py other_trackers/setup_struck.py setup.py tracker_tools/__init__.py PVM_framework/PVM_options.py PVM_framework/PVM_Storage.py PVM_models/PVM_Manager.py PVM_models/demo04_run.py PVM_models/demo04_unit.py PVM_tests/test_fast_routines.py ver_to_float ColorHistogramBackProjection FloodFillBoundingBoxer CamshiftBoundingBoxer CenterVisionTracker CMTVisionTracker HSHistogramBackprojectionTracker BasicHistogramBackprojectionTracker UVHistogramBackprojectionTracker NullVisionTracker StruckTracker test_tld TLDVisionTracker AbstractSignalProvider ExecutionManager ExecutionUnit ModelExecution save_model load_legacy_pickle_file load_model _worker_code _supervisor_run mapped_load_global _run_debug_server _monitor_debug_session _run_debug_session run_model mapname load_legacy_pickle DebugLogger initialize_weights MLPPrototype MLP random_with_singular_values view_as_ndarray optimal_weight_initialization random_ortho test get_layers get_weights gather_surround connect_forward_and_back get_surround get_fan_in create_basic_unit_v1 upgrade_readout upgrade_dictionary_to_ver1_0 upgrade apply_options connect_back generate_v1 generate_dict_options create_blank_dictionary connect_forward_and_back_v1 PVMDataset InteractiveDictionaryExplorer DisplayHelperObject VideoRecorder get_option_help parse_options SimpleSignalProvider StereoSignalProvider TripleSignalProvider test_storage_local test_storage_remote get_S3_credentials Storage 
DynamicView SharedNumpyArray ShmemRawArray DoubleBufferedSharedNumpyArray ShmemBufferWrapper SharedNumpyArray_like np_type_id_to_ctypes Barrier Manager generate_dict run_demo ExecutionUnit Manager generate_dict run_demo ExecutionUnit Manager generate_dict run_demo ExecutionUnit generate_dict get_neighbours Manager run_demo create_unit_parameters ExecutionUnit Manager generate_dict run_demo ExecutionUnit disp process_data normalized_dot rotate get_square_res Manager plot_model plot_weight_dists runningMeanFast run_model cleanup PVMVisionTracker Manager ExecutionUnit ExecutionUnit ExecutionUnit test_initialization test_scaling test_mask_randomized test_box_intersection test_box_intersection_randomized test_area test_fundamentals test_PVM_crate test_generalized_outer dot_sigmoid test_dot_sigmoid test_dot_transpose test_dot_add test_derivative_dot test_dot_transpose_simple test_basic_channels_2 test_basic_channels_1 test_MLPN_eval00 test_perceptron02 test_MLPN_eval01 test_MLPN_xor_poly test_perceptron_two_layers test_perceptron01 test_MLPN_xor load test_pickling_darray test_IPC_attachability worker_test_IPC_attachability worker_test_sharedness_of_a_darray test_sharedness_of_an_array test_pickling_array worker_test_sharednes_of_an_array test_sharedness_of_a_darray save ver_to_float worker_test_barrier parrent_test_barrier test_barrier test_barrier_spinlocks AbstractBoundingBoxer AbstractVisionTracker GenericVisionTracker time_dir_name TrackerBenchmark BoundingRegion LabeledMovieFrame LabeledMovieHeader LabeledMovieWriter FrameCollection LabeledMovieReader export_to_zip ImagesToLabeledMovie LabelingApp convert_to_pickle ImagesToLabeledMovie scale_content split fromarray processImage COLOR_BGR2GRAY tuple set_width_and_height astype selectObject cvtColor TLD2 append find_class mapname StringIO Unpickler Unpickler str dump close GzipFile info str read close GzipFile info load_legacy_pickle seed int list cleanup ExecutionUnit exit method import_module getattr execution_steps sleep append set_flush_denormals range enumerate resume_workers Process join parent_barrier worker_barrier start Barrier import_module execution_steps info append sleep range flush print close cmdloop info makefile InteractiveDictionaryExplorer socket info close AF_INET connect stop sleep SOCK_STREAM listen str socket Thread stop accept join bind setsockopt SO_REUSEADDR AF_INET start info append SOCK_STREAM SOL_SOCKET flush start ModelExecution finish mat random qr array diag random_ortho concatenate list ndarray view append keys append initialize_weights SharedNumpyArray append float range len sqrt rand optimal_weight_initialization int uint8 read SharedNumpyArray glob cpu_count close strftime int64 info range getrandbits open append int range append range get_fan_in print SharedNumpyArray_like shape info append range get_fan_in print SharedNumpyArray_like shape info append range get_fan_in SharedNumpyArray_like info append range get_surround SharedNumpyArray_like range append dict generate_v1 cpu_count connect_back create_blank_dictionary generate_missing_parameters create_basic_unit_v1 shape append range gather_surround SharedNumpyArray import_module info float connect_forward_and_back_v1 enumerate int uint8 print SharedNumpyArray_like len insert len range info float range uint8 SharedNumpyArray range pop SharedNumpyArray upgrade_to_ver_1 SharedNumpyArray_like import_module info append float upgrade range list keys TextWrapper list keys copy Provider access_key info secret_key Credentials compile list_path Storage remove put 
remove put Storage list_path len isinstance from_address get_address ShmemBufferWrapper __init__ sizeof len c_int16 c_ssize_t c_int c_int32 c_byte c_int64 int uint8 generate_missing_parameters SharedNumpyArray cpu_count import_module append create_blank_dictionary range generate_dict save_model load_model print running Manager start finish isfile step ModelExecution astype int8 copyto float run_model VideoCapture error exit append get_neighbours create_unit_parameters imshow uint8 astype resize uint8 exp imwrite load_model print putText astype FONT_HERSHEY_SIMPLEX waitKey rotate imshow zeros normalized_dot range flush len shape PVM_MAX_LAYERS max range subplot hstack grid copy shape title hist array figure legend savefig prod enumerate get show load_model plot zeros_like xlabel grid ylabel put title savefig Storage figure legend range len strip sub compile save_model cpu_count SimpleSignalProvider put PVMDataset open all load_model cleanup exit upgrade_dictionary_to_ver1_0 apply_options upgrade get close parse_options Manager freeze_learning info float generate_dict_options enumerate int remove read print error min write dumps isfile set BoundingRegion zeros draw_box BoundingRegion draw_box BoundingRegion COLOR_BGR2GRAY findContours contourArea zeros cvtColor zeros scale draw_box BoundingRegion BoundingRegion get_mask get_box_intersection bitwise_and zeros set_image_shape BoundingRegion get_mask get_box_intersection bitwise_and randint range set_image_shape BoundingRegion get_mask randint rectangle zeros range get_fan_in get_surround time randn print ones derivative_dot_c zeros derivative_dot_cython dot_transpose_mul_c time dot_transpose_mul_cython randn print ones zeros time randn print dot_transpose_cython dot_transpose_c zeros time generalized_outer_cython randn print generalized_outer_c zeros time dot_sigmoid_c randn print zeros dot_sigmoid_cython dot_sigmoid time T randn print dot_add_c dot zeros LabeledMovieFrame create_channel zeros set_default_channel set_image LabeledMovieFrame create_channel zeros set_default_channel set_image seed evaluate MLP train get_layers randint get_weights array range seed evaluate MLP train get_layers randint get_weights array range seed evaluate MLP train get_layers randint get_weights array range evaluate MLP get_layers get_activation array evaluate MLP get_layers get_activation array seed evaluate MLP train get_layers get_weights array range seed evaluate MLP train get_layers get_weights array range dump GzipFile close GzipFile close load str SharedNumpyArray save float load str DoubleBufferedSharedNumpyArray SharedNumpyArray save uint32 float start Process join SharedNumpyArray Process join DoubleBufferedSharedNumpyArray SharedNumpyArray start uint32 float float SharedNumpyArray Process join SharedNumpyArray start float worker_barrier resume_workers Process join parent_barrier SharedNumpyArray start Barrier append float range parrent_test_barrier parrent_test_barrier skip localtime get_image writestr imwrite ZIP_DEFLATED get_label imencode get_targets close get_box_pixels enumerate FrameCollection resize ZipInfo empty ZipFile range load_from_file len LabeledMovieFrame VideoCapture read create_channel namedWindow tuple waitKey imshow moveWindow set_default_channel FrameCollection resize append write_to_file quit rot90 flip set_image tuple resize write_to_file scale_to_new_image_shape load_from_file create_channel draw_box get_label waitKey shape imshow range get_image set_label COLOR_BGR2GRAY get_targets copy Frame set_image float namedWindow 
moveWindow FrameCollection quit set_default_channel cvtColor len | # Predictive Vision Model ## Introduction This repository contains necessary files and demos used for running multicore simulations for the Predictive Vision Model paper entitled *"Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network"* (Piekniewski et al., 2016 (https://arxiv.org/abs/1607.06854)). The code includes a framework that can run a large number of python objects in parallel, synchronized by global barriers, utilizing state kept in shared memory. ## Starting up For the quickest setup get a clean Ubuntu 16.04 machine with as many compute cores as you can get (should also work fine on 15.10, 15.04 and 14.04 but 16.04 is tested). Next clone the repo and run the following commands: ``` git clone [email protected]:braincorp/PVM | 1,597 |
braunefe/BWEeval | ['word embeddings', 'bilingual lexicon induction', 'machine translation'] | ['Evaluating bilingual word embeddings on the long tail'] | scripts/average_for_eval_levensthein.py scripts/maxmarg/tripletModelConcat.py scripts/checkIfInSeed.py scripts/maxmarg/createTripletsSingleSource.py scripts/ridge/computeMostSimilar.py scripts/average_for_eval.py scripts/ridge/bwe_mapping.py scripts/maxmarg/computeMostSimilar.py scripts/countMatches.py load_valid average_n_distances get_options print_eval get_most_similar get_options get_options get_most_similar get_options parse_args add_option OptionParser strip split open str list defaultdict items load_valid sorted len params my_dists edit ratio append float keys enumerate print join list map print format | ### Evaluating bilingual word embeddings on the long tail Bilingual word embeddings are useful for bilingual lexicon induction, the task of mining translations of given words. Many studies have shown that bilingual word embeddings perform well for bilingual lexicon induction but they focused on frequent words in general domains. For many applications, bilingual lexicon induction of rare and domain-specific words is of critical importance. Therefore, we design a new task to evaluate bilingual word embeddings on rare words in different domains. We show that state-of-the-art approaches fail on this task and present simple new techniques to improve bilingual word | 1,598 |
brechtlaperre/DTW_measure | ['time series', 'dynamic time warping'] | ['Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine Learning'] | src/model/LSTMnn.py src/dtw/dtw_m_e.py src/dtw/experiment.py src/model/persistence.py src/model/experiment.py src/data/build_input.py src/visualize/visualize.py src/data/preprocess.py src/dtw/dtw_measure.py src/model/metrics.py get_storm_dates build_input controlled_train_test_split get_storms extract_from_split shift_and_normalize extract_data store_data_sets split_data format_to_lstm_input preprocess_data add_features compose_date read_raw_data preprocess preprocess_raw _find_path _compute_dist _compute_dist_windowed dtw_measure _compute_cost eff_dtw_measure _eff_compute_dist_windowed _eff_compute_cost_windowed _eff_find_path reformat_dtw_res compute_measure load_testing_sets extract_continuous_intervals train_model save_test_forecast load_test_storm_dates load_testing_sets run_model numpy_to_dataloader main load_training_sets LSTMnn mean_error mae pod eval_active_state latency_metric plot_roc apply_threshold heidke_measure pofd store_to_csv prediction_efficiency evaluate far rmse linear_relation print_evaluation pearson_correlation evaluate_active_states freq_bias persistence_experiment extract_cont_intervals_from_index persistence_dtw_measure persistence_predict persistence_eval plot_set_of_storms get_range sort copy extract_from_split list hstack extract_from_split train_test_split arange len copy DataFrame transform dropna StandardScaler values fit shift_and_normalize items list T apply zeros values iterrows format print append len get_storm_dates controlled_train_test_split get_storms extract_data read_hdf store_data_sets split_data format_to_lstm_input preprocess_data diff asarray add_features compose_date drop read_raw_data preprocess to_hdf cumsum min range zeros argmin array range zeros sqrt enumerate ones sqrt infty enumerate _find_path _compute_dist _compute_dist_windowed eff_dtw_measure _compute_cost arange infty ones transpose square dia_matrix sqrt array range len print range zeros argmin array range _eff_find_path infty _eff_compute_dist_windowed _eff_compute_cost_windowed nonzero zip index DatetimeIndex difference append DataFrame T format set_index to_csv apply div sum array DataFrame dtw_measure reformat_dtw_res print load_testing_sets unique extract_continuous_intervals zeros abs range Tensor TensorDataset format inf criterion model backward print dataset zero_grad eval load_state_dict save train step range len load model eval load_state_dict Tensor train_model plot_set_of_storms build_input evaluate save_test_forecast print DataFrame load_test_storm_dates RMSprop MSELoss load_testing_sets LSTMnn parameters run_model numpy_to_dataloader load_training_sets len sqrt append sum std range fit append range prediction_efficiency mean_error mae rmse linear_relation pearson_correlation print format array2string list keys len zeros range apply_threshold heidke_measure far pofd pod freq_bias show apply_threshold format arange xlabel ylabel pofd title pod scatter zeros array range enumerate format print array2string eval_active_state plot_roc dtw_windowed unique zeros abs range shift format extract_cont_intervals_from_index print dtw_measure DataFrame copy index to_numpy unique persistence_predict zeros dropna abs range enumerate evaluate persistence_dtw_measure shape repeat persistence_predict zeros range enumerate difference Timedelta append datetime64 list controlled_train_test_split read_hdf dict rename enumerate 
persistence_eval subplots get_range set_major_formatter DataFrame show set_major_locator set_title set_xlabel savefig legend setp DateFormatter HourLocator format get_xticklabels tight_layout set lineplot enumerate set_ylabel len print empty DataFrame | # [Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine Learning](https://www.frontiersin.org/articles/10.3389/fspas.2020.00039/full) __Authors:__ Brecht Laperre, Jorge Amaya, Giovanni Lapenta __Contact:__ [email protected] ---- [](https://mybinder.org/v2/gh/brechtlaperre/DTW_measure/master?filepath=notebooks) All datafiles can be found on [OSF](https://osf.io/nrh7m/), or recreated yourself by following the instructions in the `notebooks` folder. | 1,599 |