Column types and value ranges: title (string, length 8–155) | citations_google_scholar (int64, 0–28.9k) | conference (string, 5 classes) | forks (int64, 0–46.3k) | issues (int64, 0–12.2k) | lastModified (string, length 19–26) | repo_url (string, length 26–130) | stars (int64, 0–75.9k) | title_google_scholar (string, length 8–155) | url_google_scholar (string, length 75–206) | watchers (int64, 0–2.77k) | year (int64, 2.02k)

title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year
---|---|---|---|---|---|---|---|---|---|---|---|
Variational Graph Recurrent Neural Networks | 135 | neurips | 31 | 3 | 2023-06-15 23:44:05.805000 | https://github.com/VGraphRNN/VGRNN | 99 | Variational graph recurrent neural networks | https://scholar.google.com/scholar?cluster=8245745974174027367&hl=en&as_sdt=0,23 | 1 | 2,019 |
Stochastic Bandits with Context Distributions | 14 | neurips | 2 | 0 | 2023-06-15 23:44:05.988000 | https://github.com/jkirschner42/ContextDistributions | 0 | Stochastic bandits with context distributions | https://scholar.google.com/scholar?cluster=4525359308545283091&hl=en&as_sdt=0,47 | 2 | 2,019 |
Geometry-Aware Neural Rendering | 18 | neurips | 2 | 1 | 2023-06-15 23:44:06.170000 | https://github.com/josh-tobin/egqn-datasets | 12 | Geometry-aware neural rendering | https://scholar.google.com/scholar?cluster=13975640074645602977&hl=en&as_sdt=0,14 | 6 | 2,019 |
Training Language GANs from Scratch | 64 | neurips | 2,436 | 170 | 2023-06-15 23:44:06.352000 | https://github.com/deepmind/deepmind-research | 11,902 | Training language gans from scratch | https://scholar.google.com/scholar?cluster=8355933578151916965&hl=en&as_sdt=0,33 | 336 | 2,019 |
On the (In)fidelity and Sensitivity of Explanations | 265 | neurips | 5 | 1 | 2023-06-15 23:44:06.535000 | https://github.com/chihkuanyeh/saliency_evaluation | 18 | On the (in) fidelity and sensitivity of explanations | https://scholar.google.com/scholar?cluster=14868848543196386114&hl=en&as_sdt=0,49 | 5 | 2,019 |
Foundations of Comparison-Based Hierarchical Clustering | 27 | neurips | 0 | 0 | 2023-06-15 23:44:06.717000 | https://github.com/mperrot/ComparisonHC | 7 | Foundations of comparison-based hierarchical clustering | https://scholar.google.com/scholar?cluster=13988948004234767193&hl=en&as_sdt=0,33 | 1 | 2,019 |
Neural Similarity Learning | 22 | neurips | 5 | 0 | 2023-06-15 23:44:06.903000 | https://github.com/wy1iu/NSL | 33 | Neural similarity learning | https://scholar.google.com/scholar?cluster=1329367267940574099&hl=en&as_sdt=0,21 | 11 | 2,019 |
Global Convergence of Least Squares EM for Demixing Two Log-Concave Densities | 10 | neurips | 0 | 0 | 2023-06-15 23:44:07.085000 | https://github.com/weiiew28/Least-Squares-EM | 0 | Global convergence of least squares EM for demixing two log-concave densities | https://scholar.google.com/scholar?cluster=12181077309043150371&hl=en&as_sdt=0,33 | 2 | 2,019 |
First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise | 39 | neurips | 0 | 0 | 2023-06-15 23:44:07.268000 | https://github.com/umutsimsekli/sgd_first_exit_time | 2 | First exit time analysis of stochastic gradient descent under heavy-tailed gradient noise | https://scholar.google.com/scholar?cluster=11205216308770275792&hl=en&as_sdt=0,33 | 1 | 2,019 |
Hyper-Graph-Network Decoders for Block Codes | 50 | neurips | 6 | 0 | 2023-06-15 23:44:07.450000 | https://github.com/facebookresearch/HyperNetworkDecoder | 18 | Hyper-graph-network decoders for block codes | https://scholar.google.com/scholar?cluster=7373030351038141081&hl=en&as_sdt=0,14 | 5 | 2,019 |
Sparse Logistic Regression Learns All Discrete Pairwise Graphical Models | 51 | neurips | 1 | 0 | 2023-06-15 23:44:07.632000 | https://github.com/wushanshan/GraphLearn | 3 | Sparse logistic regression learns all discrete pairwise graphical models | https://scholar.google.com/scholar?cluster=18105640724379310504&hl=en&as_sdt=0,33 | 3 | 2,019 |
Coordinated hippocampal-entorhinal replay as structural inference | 13 | neurips | 0 | 0 | 2023-06-15 23:44:07.814000 | https://github.com/talfanevans/Coordinated_replay_for_structural_inference_NeurIPS_2019 | 0 | Coordinated hippocampal-entorhinal replay as structural inference | https://scholar.google.com/scholar?cluster=2193106409668155739&hl=en&as_sdt=0,44 | 1 | 2,019 |
Why Can't I Dance in the Mall? Learning to Mitigate Scene Bias in Action Recognition | 117 | neurips | 13 | 2 | 2023-06-15 23:44:07.997000 | https://github.com/vt-vl-lab/SDN | 81 | Why can't i dance in the mall? learning to mitigate scene bias in action recognition | https://scholar.google.com/scholar?cluster=7980230470759828284&hl=en&as_sdt=0,25 | 4 | 2,019 |
Invert to Learn to Invert | 73 | neurips | 12 | 0 | 2023-06-15 23:44:08.183000 | https://github.com/pputzky/invertible_rim | 35 | Invert to learn to invert | https://scholar.google.com/scholar?cluster=5749756993506538119&hl=en&as_sdt=0,11 | 3 | 2,019 |
Metamers of neural networks reveal divergence from human perceptual systems | 45 | neurips | 3 | 0 | 2023-06-15 23:44:08.377000 | https://github.com/jenellefeather/model_metamers | 4 | Metamers of neural networks reveal divergence from human perceptual systems | https://scholar.google.com/scholar?cluster=11487383284090666509&hl=en&as_sdt=0,14 | 1 | 2,019 |
Optimal Sparse Decision Trees | 151 | neurips | 9 | 5 | 2023-06-15 23:44:08.574000 | https://github.com/xiyanghu/OSDT | 87 | Optimal sparse decision trees | https://scholar.google.com/scholar?cluster=2250336388738514433&hl=en&as_sdt=0,5 | 6 | 2,019 |
Staying up to Date with Online Content Changes Using Reinforcement Learning for Scheduling | 9 | neurips | 15 | 0 | 2023-06-15 23:44:08.757000 | https://github.com/microsoft/Optimal-Freshness-Crawl-Scheduling | 34 | Staying up to date with online content changes using reinforcement learning for scheduling | https://scholar.google.com/scholar?cluster=817478686075207663&hl=en&as_sdt=0,33 | 12 | 2,019 |
This Looks Like That: Deep Learning for Interpretable Image Recognition | 763 | neurips | 104 | 17 | 2023-06-15 23:44:08.939000 | https://github.com/cfchen-duke/ProtoPNet | 280 | This looks like that: deep learning for interpretable image recognition | https://scholar.google.com/scholar?cluster=13319230358009390187&hl=en&as_sdt=0,5 | 9 | 2,019 |
Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning | 7 | neurips | 6 | 0 | 2023-06-15 23:44:09.121000 | https://github.com/kakao/DAFT | 32 | Learning dynamics of attention: Human prior for interpretable machine reasoning | https://scholar.google.com/scholar?cluster=5091360286215129323&hl=en&as_sdt=0,3 | 10 | 2,019 |
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes | 50 | neurips | 5 | 7 | 2023-06-15 23:44:09.303000 | https://github.com/revbucket/geometric-certificates | 40 | Provable certificates for adversarial examples: Fitting a ball in the union of polytopes | https://scholar.google.com/scholar?cluster=12220982508626507839&hl=en&as_sdt=0,33 | 4 | 2,019 |
Fast Parallel Algorithms for Statistical Subset Selection Problems | 8 | neurips | 0 | 0 | 2023-06-15 23:44:09.486000 | https://github.com/robo-sq/dash | 1 | Fast parallel algorithms for statistical subset selection problems | https://scholar.google.com/scholar?cluster=1744974239462418532&hl=en&as_sdt=0,34 | 2 | 2,019 |
On Lazy Training in Differentiable Programming | 646 | neurips | 1 | 0 | 2023-06-15 23:44:09.680000 | https://github.com/edouardoyallon/lazy-training-CNN | 11 | On lazy training in differentiable programming | https://scholar.google.com/scholar?cluster=7609132224233862548&hl=en&as_sdt=0,47 | 2 | 2,019 |
Estimating Convergence of Markov chains with L-Lag Couplings | 39 | neurips | 0 | 0 | 2023-06-15 23:44:09.862000 | https://github.com/niloyb/LlagCouplings | 8 | Estimating convergence of Markov chains with L-lag couplings | https://scholar.google.com/scholar?cluster=7664057036629656695&hl=en&as_sdt=0,5 | 3 | 2,019 |
Neural Multisensory Scene Inference | 7 | neurips | 0 | 0 | 2023-06-15 23:44:10.045000 | https://github.com/lim0606/pytorch-generative-multisensory-network | 2 | Neural multisensory scene inference | https://scholar.google.com/scholar?cluster=12739272795826190598&hl=en&as_sdt=0,5 | 5 | 2,019 |
Fixing Implicit Derivatives: Trust-Region Based Learning of Continuous Energy Functions | 6 | neurips | 3 | 0 | 2023-06-15 23:44:10.228000 | https://github.com/MatteoT90/WibergianLearning | 6 | Fixing implicit derivatives: Trust-region based learning of continuous energy functions | https://scholar.google.com/scholar?cluster=14745296383921099164&hl=en&as_sdt=0,31 | 5 | 2,019 |
Correlation Clustering with Adaptive Similarity Queries | 14 | neurips | 2 | 0 | 2023-06-15 23:44:10.411000 | https://github.com/AP15/NeurIPS_2019 | 0 | Correlation clustering with adaptive similarity queries | https://scholar.google.com/scholar?cluster=7825046887341981145&hl=en&as_sdt=0,5 | 1 | 2,019 |
Ease-of-Teaching and Language Structure from Emergent Communication | 77 | neurips | 0 | 0 | 2023-06-15 23:44:10.594000 | https://github.com/FushanLi/Ease-of-teaching-and-language-structure | 4 | Ease-of-teaching and language structure from emergent communication | https://scholar.google.com/scholar?cluster=9879290810297308236&hl=en&as_sdt=0,5 | 1 | 2,019 |
Practical Differentially Private Top-k Selection with Pay-what-you-get Composition | 50 | neurips | 0 | 0 | 2023-06-15 23:44:10.776000 | https://github.com/rrogers386/DPComposition | 0 | Practical differentially private top-k selection with pay-what-you-get composition | https://scholar.google.com/scholar?cluster=13096075295123378083&hl=en&as_sdt=0,10 | 3 | 2,019 |
muSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking | 22 | neurips | 25 | 0 | 2023-06-15 23:44:10.959000 | https://github.com/yu-lab-vt/muSSP | 92 | muSSP: Efficient min-cost flow algorithm for multi-object tracking | https://scholar.google.com/scholar?cluster=5651751699578908397&hl=en&as_sdt=0,36 | 5 | 2,019 |
Invertible Convolutional Flow | 37 | neurips | 0 | 1 | 2023-06-15 23:44:11.142000 | https://github.com/Karami-m/Invertible-Convolutional-Flow | 2 | Invertible convolutional flow | https://scholar.google.com/scholar?cluster=13011222781620393889&hl=en&as_sdt=0,33 | 2 | 2,019 |
Neural Relational Inference with Fast Modular Meta-learning | 53 | neurips | 11 | 2 | 2023-06-15 23:44:11.324000 | https://github.com/FerranAlet/modular-metalearning | 72 | Neural relational inference with fast modular meta-learning | https://scholar.google.com/scholar?cluster=10911682641501224691&hl=en&as_sdt=0,5 | 5 | 2,019 |
Learning to Predict Layout-to-image Conditional Convolutions for Semantic Image Synthesis | 159 | neurips | 16 | 5 | 2023-06-15 23:44:11.507000 | https://github.com/xh-liu/CC-FPSE | 120 | Learning to predict layout-to-image conditional convolutions for semantic image synthesis | https://scholar.google.com/scholar?cluster=10050858944004797007&hl=en&as_sdt=0,33 | 9 | 2,019 |
Approximate Inference Turns Deep Networks into Gaussian Processes | 80 | neurips | 4 | 0 | 2023-06-15 23:44:11.689000 | https://github.com/team-approx-bayes/dnn2gp | 45 | Approximate inference turns deep networks into gaussian processes | https://scholar.google.com/scholar?cluster=7367896344754763984&hl=en&as_sdt=0,33 | 2 | 2,019 |
SGD on Neural Networks Learns Functions of Increasing Complexity | 106 | neurips | 2 | 0 | 2023-06-15 23:44:11.873000 | https://github.com/anoneurips2019/SGD-learns-functions-of-increasing-complexity | 2 | Sgd on neural networks learns functions of increasing complexity | https://scholar.google.com/scholar?cluster=7545613427429088321&hl=en&as_sdt=0,33 | 0 | 2,019 |
Optimistic Distributionally Robust Optimization for Nonparametric Likelihood Approximation | 19 | neurips | 1 | 1 | 2023-06-15 23:44:12.056000 | https://github.com/sorooshafiee/Nonparam_Likelihood | 3 | Optimistic distributionally robust optimization for nonparametric likelihood approximation | https://scholar.google.com/scholar?cluster=4678014914038724524&hl=en&as_sdt=0,39 | 1 | 2,019 |
Don't take it lightly: Phasing optical random projections with unknown operators | 10 | neurips | 1 | 0 | 2023-06-15 23:44:12.239000 | https://github.com/swing-research/opu_phase | 5 | Don't take it lightly: Phasing optical random projections with unknown operators | https://scholar.google.com/scholar?cluster=17218217643749402266&hl=en&as_sdt=0,24 | 3 | 2,019 |
Visualizing the PHATE of Neural Networks | 22 | neurips | 9 | 1 | 2023-06-15 23:44:12.421000 | https://github.com/scottgigante/m-phate | 54 | Visualizing the phate of neural networks | https://scholar.google.com/scholar?cluster=10386490094735886479&hl=en&as_sdt=0,33 | 7 | 2,019 |
Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks | 201 | neurips | 28 | 11 | 2023-06-15 23:44:12.603000 | https://github.com/youzhonghui/gate-decorator-pruning | 187 | Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks | https://scholar.google.com/scholar?cluster=364370970700584447&hl=en&as_sdt=0,5 | 10 | 2,019 |
Kalman Filter, Sensor Fusion, and Constrained Regression: Equivalences and Insights | 13 | neurips | 0 | 0 | 2023-06-15 23:44:12.785000 | https://github.com/mariajahja/kf-sf-flu-nowcasting | 8 | Kalman filter, sensor fusion, and constrained regression: Equivalences and insights | https://scholar.google.com/scholar?cluster=15806795739922263365&hl=en&as_sdt=0,5 | 2 | 2,019 |
Practical Deep Learning with Bayesian Principles | 202 | neurips | 22 | 5 | 2023-06-15 23:44:12.968000 | https://github.com/team-approx-bayes/dl-with-bayes | 238 | Practical deep learning with Bayesian principles | https://scholar.google.com/scholar?cluster=13355160445802491048&hl=en&as_sdt=0,5 | 14 | 2,019 |
Deep Active Learning with a Neural Architecture Search | 39 | neurips | 1 | 0 | 2023-06-15 23:44:13.151000 | https://github.com/geifmany/Active-inas | 6 | Deep active learning with a neural architecture search | https://scholar.google.com/scholar?cluster=17316861516778734731&hl=en&as_sdt=0,33 | 2 | 2,019 |
Quality Aware Generative Adversarial Networks | 28 | neurips | 3 | 0 | 2023-06-15 23:44:13.333000 | https://github.com/lfovia/QAGANS | 20 | Quality aware generative adversarial networks | https://scholar.google.com/scholar?cluster=7569135651621693544&hl=en&as_sdt=0,33 | 1 | 2,019 |
Control What You Can: Intrinsically Motivated Task-Planning Agent | 25 | neurips | 1 | 0 | 2023-06-15 23:44:13.516000 | https://github.com/s-bl/cwyc | 4 | Control what you can: Intrinsically motivated task-planning agent | https://scholar.google.com/scholar?cluster=4748849991670858589&hl=en&as_sdt=0,47 | 2 | 2,019 |
Momentum-Based Variance Reduction in Non-Convex SGD | 259 | neurips | 7,320 | 1,025 | 2023-06-15 23:44:13.699000 | https://github.com/google-research/google-research | 29,776 | Momentum-based variance reduction in non-convex sgd | https://scholar.google.com/scholar?cluster=15315656138665062900&hl=en&as_sdt=0,31 | 727 | 2,019 |
Adversarial Self-Defense for Cycle-Consistent GANs | 35 | neurips | 2 | 0 | 2023-06-15 23:44:13.883000 | https://github.com/dbash/pix2pix_cyclegan_guess_noise | 10 | Adversarial self-defense for cycle-consistent GANs | https://scholar.google.com/scholar?cluster=2846733163024685583&hl=en&as_sdt=0,33 | 2 | 2,019 |
Ultrametric Fitting by Gradient Descent | 25 | neurips | 3 | 0 | 2023-06-15 23:44:14.067000 | https://github.com/PerretB/ultrametric-fitting | 8 | Ultrametric fitting by gradient descent | https://scholar.google.com/scholar?cluster=1064532168086709457&hl=en&as_sdt=0,11 | 2 | 2,019 |
Expressive power of tensor-network factorizations for probabilistic modeling | 90 | neurips | 9 | 0 | 2023-06-15 23:44:14.249000 | https://github.com/glivan/tensor_networks_for_probabilistic_modeling | 28 | Expressive power of tensor-network factorizations for probabilistic modeling | https://scholar.google.com/scholar?cluster=973997541769819292&hl=en&as_sdt=0,33 | 5 | 2,019 |
Machine Teaching of Active Sequential Learners | 24 | neurips | 2 | 0 | 2023-06-15 23:44:14.432000 | https://github.com/AaltoPML/machine-teaching-of-active-sequential-learners | 9 | Machine teaching of active sequential learners | https://scholar.google.com/scholar?cluster=16295600938122436621&hl=en&as_sdt=0,5 | 12 | 2,019 |
Beyond Confidence Regions: Tight Bayesian Ambiguity Sets for Robust MDPs | 41 | neurips | 2 | 0 | 2023-06-15 23:44:14.615000 | https://github.com/marekpetrik/craam2 | 4 | Beyond confidence regions: Tight Bayesian ambiguity sets for robust MDPs | https://scholar.google.com/scholar?cluster=2675496836563141950&hl=en&as_sdt=0,5 | 1 | 2,019 |
Spatial-Aware Feature Aggregation for Image based Cross-View Geo-Localization | 111 | neurips | 6 | 2 | 2023-06-15 23:44:14.797000 | https://github.com/shiyujiao/SAFA | 34 | Spatial-aware feature aggregation for image based cross-view geo-localization | https://scholar.google.com/scholar?cluster=9193879788898998402&hl=en&as_sdt=0,14 | 1 | 2,019 |
Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification | 68 | neurips | 0 | 0 | 2023-06-15 23:44:14.980000 | https://github.com/lucaoneto/NIPS2019_Fairness | 0 | Leveraging labeled and unlabeled data for consistent fair binary classification | https://scholar.google.com/scholar?cluster=2612431805502429071&hl=en&as_sdt=0,33 | 1 | 2,019 |
Tight Dimensionality Reduction for Sketching Low Degree Polynomial Kernels | 12 | neurips | 7,320 | 1,025 | 2023-06-15 23:44:15.162000 | https://github.com/google-research/google-research | 29,776 | Tight dimensionality reduction for sketching low degree polynomial kernels | https://scholar.google.com/scholar?cluster=2891379264114413860&hl=en&as_sdt=0,15 | 727 | 2,019 |
Regularized Anderson Acceleration for Off-Policy Deep Reinforcement Learning | 26 | neurips | 4 | 0 | 2023-06-15 23:44:15.345000 | https://github.com/shiwj16/raa-drl | 9 | Regularized Anderson acceleration for off-policy deep reinforcement learning | https://scholar.google.com/scholar?cluster=7454070558612114755&hl=en&as_sdt=0,33 | 2 | 2,019 |
Kernel Stein Tests for Multiple Model Comparison | 11 | neurips | 2 | 0 | 2023-06-15 23:44:15.527000 | https://github.com/jenninglim/model-comparison-test | 5 | Kernel stein tests for multiple model comparison | https://scholar.google.com/scholar?cluster=8758698782174042861&hl=en&as_sdt=0,5 | 4 | 2,019 |
Explanations can be manipulated and geometry is to blame | 245 | neurips | 10 | 2 | 2023-06-15 23:44:15.710000 | https://github.com/pankessel/explanations_can_be_manipulated | 31 | Explanations can be manipulated and geometry is to blame | https://scholar.google.com/scholar?cluster=14180570023451576122&hl=en&as_sdt=0,33 | 1 | 2,019 |
Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks | 34 | neurips | 3 | 1 | 2023-06-15 23:44:15.893000 | https://github.com/ayaabdelsalam91/Input-Cell-Attention | 12 | Input-cell attention reduces vanishing saliency of recurrent neural networks | https://scholar.google.com/scholar?cluster=4632327933243562924&hl=en&as_sdt=0,33 | 3 | 2,019 |
Paradoxes in Fair Machine Learning | 31 | neurips | 0 | 0 | 2023-06-15 23:44:16.075000 | https://github.com/pgoelz/equalized | 3 | Paradoxes in fair machine learning | https://scholar.google.com/scholar?cluster=18338740097234946174&hl=en&as_sdt=0,36 | 3 | 2,019 |
Volumetric Correspondence Networks for Optical Flow | 183 | neurips | 23 | 5 | 2023-06-15 23:44:16.258000 | https://github.com/gengshan-y/VCN | 147 | Volumetric correspondence networks for optical flow | https://scholar.google.com/scholar?cluster=16527531324179353765&hl=en&as_sdt=0,10 | 6 | 2,019 |
Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks | 228 | neurips | 3 | 0 | 2023-06-15 23:44:16.441000 | https://github.com/cwein3/large-lr-code | 6 | Towards explaining the regularization effect of initial large learning rate in training neural networks | https://scholar.google.com/scholar?cluster=6617722188304549370&hl=en&as_sdt=0,36 | 2 | 2,019 |
Multi-marginal Wasserstein GAN | 76 | neurips | 10 | 0 | 2023-06-15 23:44:16.623000 | https://github.com/caojiezhang/MWGAN | 51 | Multi-marginal wasserstein gan | https://scholar.google.com/scholar?cluster=10067080185740979237&hl=en&as_sdt=0,33 | 1 | 2,019 |
PyTorch: An Imperative Style, High-Performance Deep Learning Library | 28,946 | neurips | 18,601 | 12,172 | 2023-06-15 23:44:16.806000 | https://github.com/pytorch/pytorch | 67,867 | Pytorch: An imperative style, high-performance deep learning library | https://scholar.google.com/scholar?cluster=3528934790668989119&hl=en&as_sdt=0,5 | 1,649 | 2,019 |
On Sample Complexity Upper and Lower Bounds for Exact Ranking from Noisy Comparisons | 15 | neurips | 0 | 0 | 2023-06-15 23:44:16.989000 | https://github.com/WenboRen/ranking-from-noisy-comparisons | 2 | On sample complexity upper and lower bounds for exact ranking from noisy comparisons | https://scholar.google.com/scholar?cluster=17371903028090691021&hl=en&as_sdt=0,41 | 2 | 2,019 |
NAT: Neural Architecture Transformer for Accurate and Compact Architectures | 77 | neurips | 12 | 0 | 2023-06-15 23:44:17.171000 | https://github.com/guoyongcs/NAT | 57 | Nat: Neural architecture transformer for accurate and compact architectures | https://scholar.google.com/scholar?cluster=2412256637570332418&hl=en&as_sdt=0,5 | 3 | 2,019 |
Learning to Self-Train for Semi-Supervised Few-Shot Classification | 262 | neurips | 11 | 9 | 2023-06-15 23:44:17.354000 | https://github.com/xinzheli1217/learning-to-self-train | 89 | Learning to self-train for semi-supervised few-shot classification | https://scholar.google.com/scholar?cluster=7879404109068143287&hl=en&as_sdt=0,5 | 8 | 2,019 |
Stochastic Frank-Wolfe for Composite Convex Minimization | 20 | neurips | 2 | 0 | 2023-06-15 23:44:17.545000 | https://github.com/alpyurtsever/SHCGM | 2 | Stochastic Frank-Wolfe for composite convex minimization | https://scholar.google.com/scholar?cluster=9717935113633697368&hl=en&as_sdt=0,33 | 1 | 2,019 |
Modeling Dynamic Functional Connectivity with Latent Factor Gaussian Processes | 8 | neurips | 1 | 0 | 2023-06-15 23:44:17.728000 | https://github.com/modestbayes/LFGP_NeurIPS | 4 | Modeling dynamic functional connectivity with latent factor Gaussian processes | https://scholar.google.com/scholar?cluster=14525732227397762230&hl=en&as_sdt=0,33 | 4 | 2,019 |
ETNet: Error Transition Network for Arbitrary Style Transfer | 20 | neurips | 5 | 2 | 2023-06-15 23:44:17.911000 | https://github.com/zhijieW94/ETNet | 77 | Etnet: Error transition network for arbitrary style transfer | https://scholar.google.com/scholar?cluster=11291490385512424160&hl=en&as_sdt=0,33 | 8 | 2,019 |
Icebreaker: Element-wise Efficient Information Acquisition with a Bayesian Deep Latent Gaussian Model | 35 | neurips | 14 | 1 | 2023-06-15 23:44:18.094000 | https://github.com/microsoft/Icebreaker | 42 | Icebreaker: Element-wise efficient information acquisition with a bayesian deep latent gaussian model | https://scholar.google.com/scholar?cluster=1836550825324169638&hl=en&as_sdt=0,5 | 8 | 2,019 |
Post training 4-bit quantization of convolutional networks for rapid-deployment | 427 | neurips | 57 | 13 | 2023-06-15 23:44:18.276000 | https://github.com/submission2019/cnn-quantization | 210 | Post training 4-bit quantization of convolutional networks for rapid-deployment | https://scholar.google.com/scholar?cluster=4498286641114478762&hl=en&as_sdt=0,36 | 8 | 2,019 |
Implicit Regularization in Deep Matrix Factorization | 359 | neurips | 12 | 1 | 2023-06-15 23:44:18.460000 | https://github.com/roosephu/deep_matrix_factorization | 29 | Implicit regularization in deep matrix factorization | https://scholar.google.com/scholar?cluster=10227179810482169638&hl=en&as_sdt=0,47 | 3 | 2,019 |
Limitations of Lazy Training of Two-layers Neural Network | 110 | neurips | 0 | 0 | 2023-06-15 23:44:18.643000 | https://github.com/bGhorbani/Lazy-Training-Neural-Nets | 1 | Limitations of lazy training of two-layers neural network | https://scholar.google.com/scholar?cluster=6757542555979455345&hl=en&as_sdt=0,14 | 3 | 2,019 |
A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning | 39 | neurips | 0 | 0 | 2023-06-15 23:44:18.825000 | https://github.com/fmaxgarcia/Meta-MDP | 8 | A meta-MDP approach to exploration for lifelong reinforcement learning | https://scholar.google.com/scholar?cluster=12587192655885814669&hl=en&as_sdt=0,5 | 3 | 2,019 |
Learning by Abstraction: The Neural State Machine | 223 | neurips | 124 | 15 | 2023-06-15 23:44:19.008000 | https://github.com/stanfordnlp/mac-network | 482 | Learning by abstraction: The neural state machine | https://scholar.google.com/scholar?cluster=7361406080192630148&hl=en&as_sdt=0,6 | 32 | 2,019 |
Unified Language Model Pre-training for Natural Language Understanding and Generation | 1,224 | neurips | 1,867 | 362 | 2023-06-15 23:44:19.190000 | https://github.com/microsoft/unilm | 12,770 | Unified language model pre-training for natural language understanding and generation | https://scholar.google.com/scholar?cluster=2361521774652423867&hl=en&as_sdt=0,5 | 260 | 2,019 |
Metric Learning for Adversarial Robustness | 156 | neurips | 8 | 0 | 2023-06-15 23:44:19.372000 | https://github.com/columbia/Metric_Learning_Adversarial_Robustness | 48 | Metric learning for adversarial robustness | https://scholar.google.com/scholar?cluster=12602705747887433697&hl=en&as_sdt=0,37 | 9 | 2,019 |
Fine-grained Optimization of Deep Neural Networks | 2 | neurips | 0 | 0 | 2023-06-15 23:44:19.555000 | https://github.com/meteozay/fg-sgd | 1 | Fine-grained optimization of deep neural networks | https://scholar.google.com/scholar?cluster=17242393399395222917&hl=en&as_sdt=0,31 | 1 | 2,019 |
Learning to Control Self-Assembling Morphologies: A Study of Generalization via Modularity | 86 | neurips | 16 | 3 | 2023-06-15 23:44:19.737000 | https://github.com/pathak22/modular-assemblies | 106 | Learning to control self-assembling morphologies: a study of generalization via modularity | https://scholar.google.com/scholar?cluster=6230712298907925889&hl=en&as_sdt=0,39 | 7 | 2,019 |
Alleviating Label Switching with Optimal Transport | 5 | neurips | 1 | 0 | 2023-06-15 23:44:19.920000 | https://github.com/pierremon/label-switching | 1 | Alleviating label switching with optimal transport | https://scholar.google.com/scholar?cluster=1201213527784885312&hl=en&as_sdt=0,10 | 1 | 2,019 |
Fisher Efficient Inference of Intractable Models | 11 | neurips | 1 | 0 | 2023-06-15 23:44:20.103000 | https://github.com/anewgithubname/Stein-Density-Ratio-Estimation | 9 | Fisher efficient inference of intractable models | https://scholar.google.com/scholar?cluster=13168405321313545565&hl=en&as_sdt=0,44 | 3 | 2,019 |
Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction | 21 | neurips | 1 | 0 | 2023-06-15 23:44:20.285000 | https://github.com/knowzou/SRVR | 6 | Stochastic gradient Hamiltonian Monte Carlo methods with recursive variance reduction | https://scholar.google.com/scholar?cluster=11585981262585149330&hl=en&as_sdt=0,21 | 2 | 2,019 |
Domes to Drones: Self-Supervised Active Triangulation for 3D Human Pose Reconstruction | 22 | neurips | 2 | 2 | 2023-06-15 23:44:20.468000 | https://github.com/ErikGartner/actor | 11 | Domes to drones: Self-supervised active triangulation for 3d human pose reconstruction | https://scholar.google.com/scholar?cluster=592377778107181309&hl=en&as_sdt=0,5 | 3 | 2,019 |
SIC-MMAB: Synchronisation Involves Communication in Multiplayer Multi-Armed Bandits | 94 | neurips | 1 | 0 | 2023-06-15 23:44:20.651000 | https://github.com/eboursier/sic-mmab | 3 | SIC-MMAB: synchronisation involves communication in multiplayer multi-armed bandits | https://scholar.google.com/scholar?cluster=15702315682738134157&hl=en&as_sdt=0,47 | 2 | 2,019 |
A Step Toward Quantifying Independently Reproducible Machine Learning Research | 92 | neurips | 5 | 0 | 2023-06-15 23:44:20.834000 | https://github.com/EdwardRaff/Quantifying-Independently-Reproducible-ML | 74 | A step toward quantifying independently reproducible machine learning research | https://scholar.google.com/scholar?cluster=3230939669723958133&hl=en&as_sdt=0,44 | 6 | 2,019 |
Latent distance estimation for random geometric graphs | 21 | neurips | 0 | 0 | 2023-06-15 23:44:21.017000 | https://github.com/ErnestoArayaValdivia/NeurIPS_Code | 0 | Latent distance estimation for random geometric graphs | https://scholar.google.com/scholar?cluster=2780885878815062825&hl=en&as_sdt=0,23 | 1 | 2,019 |
On the Inductive Bias of Neural Tangent Kernels | 205 | neurips | 3 | 1 | 2023-06-15 23:44:21.200000 | https://github.com/albietz/ckn_kernel | 13 | On the inductive bias of neural tangent kernels | https://scholar.google.com/scholar?cluster=4267008353441249556&hl=en&as_sdt=0,5 | 2 | 2,019 |
Rethinking Kernel Methods for Node Representation Learning on Graphs | 20 | neurips | 7 | 3 | 2023-06-15 23:44:21.383000 | https://github.com/bluer555/KernelGCN | 32 | Rethinking kernel methods for node representation learning on graphs | https://scholar.google.com/scholar?cluster=3909779312042974366&hl=en&as_sdt=0,5 | 4 | 2,019 |
Input Similarity from the Neural Network Perspective | 40 | neurips | 3 | 0 | 2023-06-15 23:44:21.580000 | https://github.com/Lydorn/netsimilarity | 26 | Input similarity from the neural network perspective | https://scholar.google.com/scholar?cluster=3029405318289332183&hl=en&as_sdt=0,5 | 3 | 2,019 |
Transfer Learning via Minimizing the Performance Gap Between Domains | 40 | neurips | 2 | 0 | 2023-06-15 23:44:21.762000 | https://github.com/bwang-ml/gapBoost | 7 | Transfer learning via minimizing the performance gap between domains | https://scholar.google.com/scholar?cluster=15708830539707170384&hl=en&as_sdt=0,44 | 3 | 2,019 |
Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning | 103 | neurips | 3 | 0 | 2023-06-15 23:44:21.946000 | https://github.com/thuml/Batch-Spectral-Shrinkage | 21 | Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning | https://scholar.google.com/scholar?cluster=787372724003726768&hl=en&as_sdt=0,5 | 3 | 2,019 |
Efficiently Learning Fourier Sparse Set Functions | 14 | neurips | 0 | 0 | 2023-06-15 23:44:22.128000 | https://github.com/andisheh94/Efficiently-Learning-Fourier-Sparse-Set-Functions | 2 | Efficiently learning Fourier sparse set functions | https://scholar.google.com/scholar?cluster=1775522679298532934&hl=en&as_sdt=0,34 | 1 | 2,019 |
Goal-conditioned Imitation Learning | 151 | neurips | 9 | 5 | 2023-06-15 23:44:22.310000 | https://github.com/dingyiming0427/goalgail | 60 | Goal-conditioned imitation learning | https://scholar.google.com/scholar?cluster=9705309728838214557&hl=en&as_sdt=0,5 | 3 | 2,019 |
Superset Technique for Approximate Recovery in One-Bit Compressed Sensing | 13 | neurips | 3 | 0 | 2023-06-15 23:44:22.493000 | https://github.com/flodinl/neurips-1bCS | 0 | Superset technique for approximate recovery in one-bit compressed sensing | https://scholar.google.com/scholar?cluster=5088393971521119646&hl=en&as_sdt=0,31 | 1 | 2,019 |
Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance | 51 | neurips | 0 | 0 | 2023-06-15 23:44:22.676000 | https://github.com/kimiandj/min_swe | 2 | Asymptotic guarantees for learning generative models with the sliced-Wasserstein distance | https://scholar.google.com/scholar?cluster=10949056232611348982&hl=en&as_sdt=0,34 | 2 | 2,019 |
Learning Nonsymmetric Determinantal Point Processes | 41 | neurips | 5 | 6 | 2023-06-15 23:44:22.859000 | https://github.com/cgartrel/nonsymmetric-DPP-learning | 20 | Learning nonsymmetric determinantal point processes | https://scholar.google.com/scholar?cluster=958215336299287859&hl=en&as_sdt=0,5 | 3 | 2,019 |
Quantum Embedding of Knowledge for Reasoning | 35 | neurips | 11 | 4 | 2023-06-15 23:44:23.042000 | https://github.com/IBM/e2r | 22 | Quantum embedding of knowledge for reasoning | https://scholar.google.com/scholar?cluster=11321153699952712196&hl=en&as_sdt=0,33 | 10 | 2,019 |
Online Normalization for Training Neural Networks | 40 | neurips | 19 | 2 | 2023-06-15 23:44:23.224000 | https://github.com/cerebras/online-normalization | 74 | Online normalization for training neural networks | https://scholar.google.com/scholar?cluster=2495221729297962361&hl=en&as_sdt=0,5 | 7 | 2,019 |
Equitable Stable Matchings in Quadratic Time | 8 | neurips | 2 | 0 | 2023-06-15 23:44:23.407000 | https://github.com/ntzia/stable-marriage | 1 | Equitable stable matchings in quadratic time | https://scholar.google.com/scholar?cluster=5357034451332937688&hl=en&as_sdt=0,34 | 2 | 2,019 |
Making AI Forget You: Data Deletion in Machine Learning | 209 | neurips | 4 | 1 | 2023-06-15 23:44:23.590000 | https://github.com/tginart/deletion-efficient-kmeans | 22 | Making ai forget you: Data deletion in machine learning | https://scholar.google.com/scholar?cluster=11624023015366681673&hl=en&as_sdt=0,5 | 4 | 2,019 |
A New Defense Against Adversarial Images: Turning a Weakness into a Strength | 103 | neurips | 10 | 0 | 2023-06-15 23:44:23.773000 | https://github.com/s-huu/TurningWeaknessIntoStrength | 36 | A new defense against adversarial images: Turning a weakness into a strength | https://scholar.google.com/scholar?cluster=11699672055738649895&hl=en&as_sdt=0,47 | 5 | 2,019 |
Divergence-Augmented Policy Optimization | 9 | neurips | 4 | 0 | 2023-06-15 23:44:23.955000 | https://github.com/lns/dapo | 36 | Divergence-augmented policy optimization | https://scholar.google.com/scholar?cluster=6823081176814326206&hl=en&as_sdt=0,33 | 3 | 2,019 |
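Each row above follows the twelve-column pipe-delimited schema from the header. A minimal parsing sketch in Python (the column order is taken from the header; the sample row is copied from the table, and the thousands-separator handling is an assumption based on how counts like `2,436` and years like `2,019` are rendered here):

```python
# Parse one pipe-delimited table row into a typed record dict.
COLUMNS = [
    "title", "citations_google_scholar", "conference", "forks", "issues",
    "lastModified", "repo_url", "stars", "title_google_scholar",
    "url_google_scholar", "watchers", "year",
]

# Columns typed int64 in the schema above.
INT_COLUMNS = {"citations_google_scholar", "forks", "issues",
               "stars", "watchers", "year"}

def parse_row(line: str) -> dict:
    """Split a `|`-delimited row and coerce the integer columns."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    record = dict(zip(COLUMNS, cells))
    for col in INT_COLUMNS:
        # Counts are rendered with thousands separators, e.g. "2,436".
        record[col] = int(record[col].replace(",", ""))
    return record

row = ("Optimal Sparse Decision Trees | 151 | neurips | 9 | 5 | "
       "2023-06-15 23:44:08.574000 | https://github.com/xiyanghu/OSDT | 87 | "
       "Optimal sparse decision trees | "
       "https://scholar.google.com/scholar?cluster=2250336388738514433&hl=en&as_sdt=0,5 | "
       "6 | 2,019 |")

rec = parse_row(row)
```

Note that splitting on `|` is safe here only because none of the paper titles or URLs in this table contain a pipe character; a general markdown-table parser would also need to handle escaped pipes.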