title | citations_google_scholar | conference | forks | issues | lastModified | repo_url | stars | title_google_scholar | url_google_scholar | watchers | year |
---|---|---|---|---|---|---|---|---|---|---|---|
How to Characterize The Landscape of Overparameterized Convolutional Neural Networks | 9 | neurips | 0 | 0 | 2023-06-16 15:10:13.489000 | https://github.com/wmyw96/convex-cnn-tf | 0 | How to characterize the landscape of overparameterized convolutional neural networks | https://scholar.google.com/scholar?cluster=16949672964324049904&hl=en&as_sdt=0,5 | 1 | 2,020 |
Adaptive Discretization for Model-Based Reinforcement Learning | 19 | neurips | 2 | 0 | 2023-06-16 15:10:13.680000 | https://github.com/seanrsinclair/AdaptiveQLearning | 1 | Adaptive discretization for model-based reinforcement learning | https://scholar.google.com/scholar?cluster=16783221082226799&hl=en&as_sdt=0,46 | 1 | 2,020 |
CodeCMR: Cross-Modal Retrieval For Function-Level Binary Source Code Matching | 39 | neurips | 13 | 4 | 2023-06-16 15:10:13.872000 | https://github.com/binaryai/CodeCMR | 43 | Codecmr: Cross-modal retrieval for function-level binary source code matching | https://scholar.google.com/scholar?cluster=8935328746274345549&hl=en&as_sdt=0,19 | 4 | 2,020 |
DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks | 30 | neurips | 2 | 1 | 2023-06-16 15:10:14.065000 | https://github.com/skypea/DAG_No_Fear | 10 | DAGs with No Fears: A closer look at continuous optimization for learning Bayesian networks | https://scholar.google.com/scholar?cluster=1019446956575519869&hl=en&as_sdt=0,50 | 3 | 2,020 |
Teaching a GAN What Not to Learn | 14 | neurips | 6 | 1 | 2023-06-16 15:10:14.257000 | https://github.com/DarthSid95/RumiGANs | 29 | Teaching a gan what not to learn | https://scholar.google.com/scholar?cluster=5006743411241941329&hl=en&as_sdt=0,3 | 2 | 2,020 |
Rethinking Learnable Tree Filter for Generic Feature Transform | 13 | neurips | 9 | 7 | 2023-06-16 15:10:14.459000 | https://github.com/StevenGrove/LearnableTreeFilterV2 | 89 | Rethinking learnable tree filter for generic feature transform | https://scholar.google.com/scholar?cluster=18019390806247170102&hl=en&as_sdt=0,33 | 2 | 2,020 |
Self-Supervised Relational Reasoning for Representation Learning | 43 | neurips | 24 | 0 | 2023-06-16 15:10:14.652000 | https://github.com/mpatacchiola/self-supervised-relational-reasoning | 136 | Self-supervised relational reasoning for representation learning | https://scholar.google.com/scholar?cluster=4065282984130236161&hl=en&as_sdt=0,44 | 7 | 2,020 |
Sufficient dimension reduction for classification using principal optimal transport direction | 12 | neurips | 2 | 0 | 2023-06-16 15:10:14.844000 | https://github.com/ChengzijunAixiaoli/POTD | 4 | Sufficient dimension reduction for classification using principal optimal transport direction | https://scholar.google.com/scholar?cluster=9453699128109678882&hl=en&as_sdt=0,5 | 1 | 2,020 |
Fast Epigraphical Projection-based Incremental Algorithms for Wasserstein Distributionally Robust Support Vector Machine | 11 | neurips | 2 | 0 | 2023-06-16 15:10:15.039000 | https://github.com/gerrili1996/Incremental_DRSVM | 0 | Fast epigraphical projection-based incremental algorithms for Wasserstein distributionally robust support vector machine | https://scholar.google.com/scholar?cluster=17557069801985892953&hl=en&as_sdt=0,33 | 2 | 2,020 |
Adaptive Reduced Rank Regression | 14 | neurips | 5 | 0 | 2023-06-16 15:10:15.255000 | https://github.com/Qiong-WU/ARRR_code | 29 | Adaptive reduced rank regression | https://scholar.google.com/scholar?cluster=833219182915456157&hl=en&as_sdt=0,48 | 2 | 2,020 |
Learning Loss for Test-Time Augmentation | 50 | neurips | 2 | 2 | 2023-06-16 15:10:15.466000 | https://github.com/bayesgroup/gps-augment | 35 | Learning loss for test-time augmentation | https://scholar.google.com/scholar?cluster=11423734549303606224&hl=en&as_sdt=0,30 | 12 | 2,020 |
Balanced Meta-Softmax for Long-Tailed Visual Recognition | 238 | neurips | 10 | 0 | 2023-06-16 15:10:15.661000 | https://github.com/jiawei-ren/BalancedMetaSoftmax | 66 | Balanced meta-softmax for long-tailed visual recognition | https://scholar.google.com/scholar?cluster=6313928950899865573&hl=en&as_sdt=0,5 | 6 | 2,020 |
MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning | 78 | neurips | 2 | 0 | 2023-06-16 15:10:15.854000 | https://github.com/ElisevanderPol/mdp-homomorphic-networks | 22 | Mdp homomorphic networks: Group symmetries in reinforcement learning | https://scholar.google.com/scholar?cluster=3290101781386627154&hl=en&as_sdt=0,5 | 2 | 2,020 |
Object Goal Navigation using Goal-Oriented Semantic Exploration | 259 | neurips | 44 | 6 | 2023-06-16 15:10:16.047000 | https://github.com/devendrachaplot/Object-Goal-Navigation | 169 | Object goal navigation using goal-oriented semantic exploration | https://scholar.google.com/scholar?cluster=2452364222221336490&hl=en&as_sdt=0,5 | 5 | 2,020 |
Efficient semidefinite-programming-based inference for binary and multi-class MRFs | 3 | neurips | 0 | 0 | 2023-06-16 15:10:16.241000 | https://github.com/locuslab/sdp_mrf | 3 | Efficient semidefinite-programming-based inference for binary and multi-class MRFs | https://scholar.google.com/scholar?cluster=795899549396489666&hl=en&as_sdt=0,33 | 6 | 2,020 |
Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing | 123 | neurips | 15 | 9 | 2023-06-16 15:10:16.467000 | https://github.com/laiguokun/Funnel-Transformer | 197 | Funnel-transformer: Filtering out sequential redundancy for efficient language processing | https://scholar.google.com/scholar?cluster=13758108828249747636&hl=en&as_sdt=0,33 | 11 | 2,020 |
Semantic Visual Navigation by Watching YouTube Videos | 48 | neurips | 0 | 19 | 2023-06-16 15:10:16.660000 | https://github.com/MatthewChang/video-dqn | 22 | Semantic visual navigation by watching youtube videos | https://scholar.google.com/scholar?cluster=5339143065575935853&hl=en&as_sdt=0,39 | 2 | 2,020 |
Learning Differential Equations that are Easy to Solve | 76 | neurips | 31 | 4 | 2023-06-16 15:10:16.853000 | https://github.com/jacobjinkelly/easy-neural-ode | 245 | Learning differential equations that are easy to solve | https://scholar.google.com/scholar?cluster=17384297955183349294&hl=en&as_sdt=0,44 | 10 | 2,020 |
Influence-Augmented Online Planning for Complex Environments | 7 | neurips | 1 | 0 | 2023-06-16 15:10:17.045000 | https://github.com/INFLUENCEorg/IAOP | 3 | Influence-augmented online planning for complex environments | https://scholar.google.com/scholar?cluster=11045895327185763569&hl=en&as_sdt=0,5 | 3 | 2,020 |
Probabilistic Time Series Forecasting with Shape and Temporal Diversity | 21 | neurips | 16 | 2 | 2023-06-16 15:10:17.237000 | https://github.com/vincent-leguen/STRIPE | 74 | Probabilistic time series forecasting with shape and temporal diversity | https://scholar.google.com/scholar?cluster=1337249375985233521&hl=en&as_sdt=0,30 | 3 | 2,020 |
Continual Deep Learning by Functional Regularisation of Memorable Past | 82 | neurips | 4 | 1 | 2023-06-16 15:10:17.443000 | https://github.com/team-approx-bayes/fromp | 37 | Continual deep learning by functional regularisation of memorable past | https://scholar.google.com/scholar?cluster=10115135321591353527&hl=en&as_sdt=0,33 | 2 | 2,020 |
Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning | 162 | neurips | 23 | 1 | 2023-06-16 15:10:17.637000 | https://github.com/snap-stanford/distance-encoding | 173 | Distance encoding: Design provably more powerful neural networks for graph representation learning | https://scholar.google.com/scholar?cluster=6342884862045270520&hl=en&as_sdt=0,5 | 8 | 2,020 |
Fast Fourier Convolution | 137 | neurips | 28 | 8 | 2023-06-16 15:10:17.829000 | https://github.com/pkumivision/FFC | 237 | Fast fourier convolution | https://scholar.google.com/scholar?cluster=2160547042943986472&hl=en&as_sdt=0,33 | 4 | 2,020 |
Learning Structured Distributions From Untrusted Batches: Faster and Simpler | 12 | neurips | 0 | 0 | 2023-06-16 15:10:18.022000 | https://github.com/secanth/federated | 1 | Learning structured distributions from untrusted batches: Faster and simpler | https://scholar.google.com/scholar?cluster=8991328284889449701&hl=en&as_sdt=0,33 | 1 | 2,020 |
Diversity can be Transferred: Output Diversification for White- and Black-box Attacks | 49 | neurips | 7 | 0 | 2023-06-16 15:10:18.215000 | https://github.com/ermongroup/ODS | 50 | Diversity can be transferred: Output diversification for white-and black-box attacks | https://scholar.google.com/scholar?cluster=13509573931669660487&hl=en&as_sdt=0,47 | 8 | 2,020 |
Efficient Low Rank Gaussian Variational Inference for Neural Networks | 19 | neurips | 1 | 1 | 2023-06-16 15:10:18.408000 | https://github.com/marctom/elrgvi | 2 | Efficient low rank gaussian variational inference for neural networks | https://scholar.google.com/scholar?cluster=9190851527291082244&hl=en&as_sdt=0,5 | 2 | 2,020 |
Probabilistic Circuits for Variational Inference in Discrete Graphical Models | 19 | neurips | 0 | 0 | 2023-06-16 15:10:18.600000 | https://github.com/AndyShih12/SPN_Variational_Inference | 14 | Probabilistic circuits for variational inference in discrete graphical models | https://scholar.google.com/scholar?cluster=8548433346916922000&hl=en&as_sdt=0,32 | 2 | 2,020 |
Labelling unlabelled videos from scratch with multi-modal self-supervision | 108 | neurips | 14 | 4 | 2023-06-16 15:10:18.792000 | https://github.com/facebookresearch/selavi | 108 | Labelling unlabelled videos from scratch with multi-modal self-supervision | https://scholar.google.com/scholar?cluster=6374132588879486685&hl=en&as_sdt=0,5 | 12 | 2,020 |
Bayesian Deep Learning and a Probabilistic Perspective of Generalization | 405 | neurips | 34 | 6 | 2023-06-16 15:10:18.984000 | https://github.com/izmailovpavel/understandingbdl | 215 | Bayesian deep learning and a probabilistic perspective of generalization | https://scholar.google.com/scholar?cluster=13252502369933124881&hl=en&as_sdt=0,5 | 6 | 2,020 |
Unsupervised Learning of Object Landmarks via Self-Training Correspondence | 10 | neurips | 7 | 0 | 2023-06-16 15:10:19.176000 | https://github.com/malldimi1/UnsupervisedLandmarks | 22 | Unsupervised learning of object landmarks via self-training correspondence | https://scholar.google.com/scholar?cluster=5849709549192485830&hl=en&as_sdt=0,10 | 1 | 2,020 |
Generative View Synthesis: From Single-view Semantics to Novel-view Images | 13 | neurips | 1 | 0 | 2023-06-16 15:10:19.382000 | https://github.com/tedyhabtegebrial/gvsnet | 20 | Generative view synthesis: From single-view semantics to novel-view images | https://scholar.google.com/scholar?cluster=6878036878351382558&hl=en&as_sdt=0,5 | 5 | 2,020 |
Deep Variational Instance Segmentation | 8 | neurips | 4 | 5 | 2023-06-16 15:10:19.576000 | https://github.com/jia2lin3yuan1/2020-instanceSeg | 25 | Deep variational instance segmentation | https://scholar.google.com/scholar?cluster=16407992024714041715&hl=en&as_sdt=0,5 | 4 | 2,020 |
Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence | 25 | neurips | 11 | 4 | 2023-06-16 15:10:19.769000 | https://github.com/liuf1990/Implicit_Dense_Correspondence | 55 | Learning implicit functions for topology-varying dense 3d shape correspondence | https://scholar.google.com/scholar?cluster=13134506600576574791&hl=en&as_sdt=0,10 | 9 | 2,020 |
Hierarchically Organized Latent Modules for Exploratory Search in Morphogenetic Systems | 12 | neurips | 1 | 0 | 2023-06-16 15:10:19.961000 | https://github.com/flowersteam/holmes | 5 | Hierarchically organized latent modules for exploratory search in morphogenetic systems | https://scholar.google.com/scholar?cluster=8455312043752460622&hl=en&as_sdt=0,5 | 1 | 2,020 |
Probabilistic Orientation Estimation with Matrix Fisher Distributions | 28 | neurips | 2 | 1 | 2023-06-16 15:10:20.152000 | https://github.com/Davmo049/Public_prob_orientation_estimation_with_matrix_fisher_distributions | 20 | Probabilistic orientation estimation with matrix fisher distributions | https://scholar.google.com/scholar?cluster=13738889246738372199&hl=en&as_sdt=0,36 | 3 | 2,020 |
Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory and Inhibitory Neurons | 3 | neurips | 0 | 0 | 2023-06-16 15:10:20.345000 | https://github.com/Pehlevan-Group/BalancedEIMinimax | 3 | Minimax dynamics of optimally balanced spiking networks of excitatory and inhibitory neurons | https://scholar.google.com/scholar?cluster=13427818061137891735&hl=en&as_sdt=0,5 | 4 | 2,020 |
Towards Deeper Graph Neural Networks with Differentiable Group Normalization | 125 | neurips | 4 | 1 | 2023-06-16 15:10:20.537000 | https://github.com/Kaixiong-Zhou/DGN | 32 | Towards deeper graph neural networks with differentiable group normalization | https://scholar.google.com/scholar?cluster=15936617451529189150&hl=en&as_sdt=0,34 | 2 | 2,020 |
Stochastic Optimization for Performative Prediction | 66 | neurips | 4 | 1 | 2023-06-16 15:10:20.729000 | https://github.com/zykls/performative-prediction | 20 | Stochastic optimization for performative prediction | https://scholar.google.com/scholar?cluster=17793048602767737159&hl=en&as_sdt=0,5 | 3 | 2,020 |
Domain Adaptation as a Problem of Inference on Graphical Models | 44 | neurips | 3 | 0 | 2023-06-16 15:10:20.921000 | https://github.com/mgong2/DA_Infer | 26 | Domain adaptation as a problem of inference on graphical models | https://scholar.google.com/scholar?cluster=15196795471254372547&hl=en&as_sdt=0,44 | 2 | 2,020 |
HOI Analysis: Integrating and Decomposing Human-Object Interaction | 58 | neurips | 46 | 2 | 2023-06-16 15:10:21.114000 | https://github.com/DirtyHarryLYL/HAKE-Action-Torch | 201 | Hoi analysis: Integrating and decomposing human-object interaction | https://scholar.google.com/scholar?cluster=1869809068174176654&hl=en&as_sdt=0,45 | 11 | 2,020 |
Strongly local p-norm-cut algorithms for semi-supervised learning and local graph clustering | 11 | neurips | 1 | 0 | 2023-06-16 15:10:21.305000 | https://github.com/MengLiuPurdue/SLQ | 2 | Strongly local p-norm-cut algorithms for semi-supervised learning and local graph clustering | https://scholar.googleusercontent.com/scholar?q=cache:qcklNVm80uwJ:scholar.google.com/+Strongly+local+p-norm-cut+algorithms+for+semi-supervised+learning+and+local+graph+clustering&hl=en&as_sdt=0,33 | 3 | 2,020 |
Deep Direct Likelihood Knockoffs | 16 | neurips | 4 | 0 | 2023-06-16 15:10:21.497000 | https://github.com/rajesh-lab/ddlk | 7 | Deep direct likelihood knockoffs | https://scholar.google.com/scholar?cluster=6129032431811553962&hl=en&as_sdt=0,33 | 5 | 2,020 |
Meta-Neighborhoods | 10 | neurips | 3 | 0 | 2023-06-16 15:10:21.688000 | https://github.com/lupalab/Meta-Neighborhoods | 7 | Meta-neighborhoods | https://scholar.google.com/scholar?cluster=2219636310662669974&hl=en&as_sdt=0,1 | 2 | 2,020 |
A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons | 8 | neurips | 1 | 0 | 2023-06-16 15:10:21.880000 | https://github.com/gmahuas/2stepGLM | 1 | A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons | https://scholar.google.com/scholar?cluster=14549343330955673745&hl=en&as_sdt=0,14 | 1 | 2,020 |
Feature Importance Ranking for Deep Learning | 66 | neurips | 14 | 2 | 2023-06-16 15:10:22.072000 | https://github.com/maksym33/FeatureImportanceDL | 31 | Feature importance ranking for deep learning | https://scholar.google.com/scholar?cluster=10291349468278084866&hl=en&as_sdt=0,14 | 3 | 2,020 |
Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks | 40 | neurips | 1 | 0 | 2023-06-16 15:10:22.265000 | https://github.com/umutsimsekli/Hausdorff-Dimension-and-Generalization | 2 | Hausdorff dimension, heavy tails, and generalization in neural networks | https://scholar.google.com/scholar?cluster=8886979563776893274&hl=en&as_sdt=0,5 | 3 | 2,020 |
Learning Physical Constraints with Neural Projections | 24 | neurips | 2 | 1 | 2023-06-16 15:10:22.482000 | https://github.com/y-sq/neural_proj | 15 | Learning physical constraints with neural projections | https://scholar.google.com/scholar?cluster=1914156028148083261&hl=en&as_sdt=0,33 | 2 | 2,020 |
Robust Optimization for Fairness with Noisy Protected Groups | 85 | neurips | 2 | 1 | 2023-06-16 15:10:22.675000 | https://github.com/wenshuoguo/robust-fairness-code | 6 | Robust optimization for fairness with noisy protected groups | https://scholar.google.com/scholar?cluster=5111841011798470081&hl=en&as_sdt=0,5 | 1 | 2,020 |
Noise-Contrastive Estimation for Multivariate Point Processes | 14 | neurips | 2 | 0 | 2023-06-16 15:10:22.867000 | https://github.com/HMEIatJHU/nce-mpp | 15 | Noise-contrastive estimation for multivariate point processes | https://scholar.google.com/scholar?cluster=10618761970260910492&hl=en&as_sdt=0,5 | 3 | 2,020 |
Multiscale Deep Equilibrium Models | 139 | neurips | 29 | 0 | 2023-06-16 15:10:23.060000 | https://github.com/locuslab/mdeq | 222 | Multiscale deep equilibrium models | https://scholar.google.com/scholar?cluster=9858453803735938369&hl=en&as_sdt=0,5 | 13 | 2,020 |
Sparse Graphical Memory for Robust Planning | 41 | neurips | 8 | 1 | 2023-06-16 15:10:23.265000 | https://github.com/scottemmons/sgm | 28 | Sparse graphical memory for robust planning | https://scholar.google.com/scholar?cluster=14782939310889640294&hl=en&as_sdt=0,36 | 6 | 2,020 |
Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction | 13 | neurips | 1 | 0 | 2023-06-16 15:10:23.461000 | https://github.com/otiliastr/brain_task_effect | 6 | Modeling task effects on meaning representation in the brain via zero-shot meg prediction | https://scholar.google.com/scholar?cluster=342859305245260360&hl=en&as_sdt=0,23 | 3 | 2,020 |
Robust Quantization: One Model to Rule Them All | 51 | neurips | 7 | 5 | 2023-06-16 15:10:23.653000 | https://github.com/moranshkolnik/RobustQuantization | 32 | Robust quantization: One model to rule them all | https://scholar.google.com/scholar?cluster=1861034670227893783&hl=en&as_sdt=0,5 | 6 | 2,020 |
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming | 68 | neurips | 23 | 2 | 2023-06-16 15:10:23.846000 | https://github.com/deepmind/jax_verify | 126 | Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming | https://scholar.google.com/scholar?cluster=415562155937952325&hl=en&as_sdt=0,43 | 8 | 2,020 |
Federated Accelerated Stochastic Gradient Descent | 108 | neurips | 1 | 0 | 2023-06-16 15:10:24.038000 | https://github.com/hongliny/FedAc-NeurIPS20 | 12 | Federated accelerated stochastic gradient descent | https://scholar.google.com/scholar?cluster=17827059715585826187&hl=en&as_sdt=0,44 | 2 | 2,020 |
An analytic theory of shallow networks dynamics for hinge loss classification | 13 | neurips | 0 | 0 | 2023-06-16 15:10:24.230000 | https://github.com/phiandark/DynHingeLoss | 0 | An analytic theory of shallow networks dynamics for hinge loss classification | https://scholar.google.com/scholar?cluster=9155304244259608841&hl=en&as_sdt=0,41 | 1 | 2,020 |
Learning to Orient Surfaces by Self-supervised Spherical CNNs | 27 | neurips | 3 | 1 | 2023-06-16 15:10:24.422000 | https://github.com/CVLAB-Unibo/compass | 15 | Learning to orient surfaces by self-supervised spherical cnns | https://scholar.google.com/scholar?cluster=13771145081900249763&hl=en&as_sdt=0,24 | 9 | 2,020 |
Parabolic Approximation Line Search for DNNs | 12 | neurips | 3 | 1 | 2023-06-16 15:10:24.615000 | https://github.com/cogsys-tuebingen/PAL | 20 | Parabolic approximation line search for dnns | https://scholar.google.com/scholar?cluster=15049615666059175813&hl=en&as_sdt=0,11 | 8 | 2,020 |
Generative causal explanations of black-box classifiers | 49 | neurips | 10 | 0 | 2023-06-16 15:10:24.808000 | https://github.com/siplab-gt/generative-causal-explanations | 25 | Generative causal explanations of black-box classifiers | https://scholar.google.com/scholar?cluster=11533502889457597902&hl=en&as_sdt=0,34 | 5 | 2,020 |
Sub-sampling for Efficient Non-Parametric Bandit Exploration | 13 | neurips | 3 | 0 | 2023-06-16 15:10:25.002000 | https://github.com/DBaudry/Sub-Sampling-Dueling-Algorithms-Neurips20 | 10 | Sub-sampling for efficient non-parametric bandit exploration | https://scholar.google.com/scholar?cluster=15996804451950962772&hl=en&as_sdt=0,10 | 1 | 2,020 |
Learning under Model Misspecification: Applications to Variational and Ensemble methods | 52 | neurips | 2 | 0 | 2023-06-16 15:10:25.195000 | https://github.com/PGM-Lab/PAC2BAYES | 9 | Learning under model misspecification: Applications to variational and ensemble methods | https://scholar.google.com/scholar?cluster=12176489635076115022&hl=en&as_sdt=0,33 | 7 | 2,020 |
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles | 71 | neurips | 13 | 0 | 2023-06-16 15:10:25.386000 | https://github.com/zjysteven/DVERGE | 54 | DVERGE: diversifying vulnerabilities for enhanced robust generation of ensembles | https://scholar.google.com/scholar?cluster=15783762290980425990&hl=en&as_sdt=0,5 | 1 | 2,020 |
Latent World Models For Intrinsically Motivated Exploration | 19 | neurips | 2 | 0 | 2023-06-16 15:10:25.579000 | https://github.com/htdt/lwm | 18 | Latent world models for intrinsically motivated exploration | https://scholar.google.com/scholar?cluster=5814046916674904026&hl=en&as_sdt=0,22 | 3 | 2,020 |
Training Generative Adversarial Networks by Solving Ordinary Differential Equations | 25 | neurips | 2,436 | 170 | 2023-06-16 15:10:25.771000 | https://github.com/deepmind/deepmind-research | 11,902 | Training generative adversarial networks by solving ordinary differential equations | https://scholar.google.com/scholar?cluster=5086997410208993615&hl=en&as_sdt=0,5 | 336 | 2,020 |
Learning of Discrete Graphical Models with Neural Networks | 6 | neurips | 0 | 0 | 2023-06-16 15:10:25.963000 | https://github.com/lanl-ansi/NeurISE | 0 | Learning of discrete graphical models with neural networks | https://scholar.google.com/scholar?cluster=17603472038483944187&hl=en&as_sdt=0,23 | 5 | 2,020 |
RepPoints v2: Verification Meets Regression for Object Detection | 86 | neurips | 49 | 14 | 2023-06-16 15:10:26.154000 | https://github.com/Scalsol/RepPointsV2 | 294 | Reppoints v2: Verification meets regression for object detection | https://scholar.google.com/scholar?cluster=14843700105251392523&hl=en&as_sdt=0,47 | 10 | 2,020 |
Unfolding the Alternating Optimization for Blind Super Resolution | 150 | neurips | 39 | 6 | 2023-06-16 15:10:26.346000 | https://github.com/greatlog/DAN | 204 | Unfolding the alternating optimization for blind super resolution | https://scholar.google.com/scholar?cluster=16834542650773066132&hl=en&as_sdt=0,10 | 5 | 2,020 |
Entrywise convergence of iterative methods for eigenproblems | 2 | neurips | 0 | 0 | 2023-06-16 15:10:26.539000 | https://github.com/VHarisop/entrywise-convergence | 0 | Entrywise convergence of iterative methods for eigenproblems | https://scholar.google.com/scholar?cluster=4848039311509999194&hl=en&as_sdt=0,5 | 3 | 2,020 |
Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views | 35 | neurips | 5 | 1 | 2023-06-16 15:10:26.731000 | https://github.com/NanboLi/MulMON | 16 | Learning object-centric representations of multi-object scenes from multiple views | https://scholar.google.com/scholar?cluster=5931711459859272834&hl=en&as_sdt=0,5 | 3 | 2,020 |
Self-supervised Co-Training for Video Representation Learning | 322 | neurips | 32 | 4 | 2023-06-16 15:10:26.923000 | https://github.com/TengdaHan/CoCLR | 274 | Self-supervised co-training for video representation learning | https://scholar.google.com/scholar?cluster=11310050495628333190&hl=en&as_sdt=0,5 | 13 | 2,020 |
Gradient Estimation with Stochastic Softmax Tricks | 53 | neurips | 5 | 2 | 2023-06-16 15:10:27.115000 | https://github.com/choidami/sst | 48 | Gradient estimation with stochastic softmax tricks | https://scholar.google.com/scholar?cluster=3158119995430472666&hl=en&as_sdt=0,38 | 2 | 2,020 |
Meta-Learning Requires Meta-Augmentation | 65 | neurips | 7,320 | 1,025 | 2023-06-16 15:10:27.307000 | https://github.com/google-research/google-research | 29,776 | Meta-learning requires meta-augmentation | https://scholar.google.com/scholar?cluster=14551438470205957966&hl=en&as_sdt=0,5 | 727 | 2,020 |
Improving GAN Training with Probability Ratio Clipping and Sample Reweighting | 20 | neurips | 5 | 3 | 2023-06-16 15:10:27.499000 | https://github.com/Holmeswww/PPOGAN | 24 | Improving gan training with probability ratio clipping and sample reweighting | https://scholar.google.com/scholar?cluster=1603102881023302087&hl=en&as_sdt=0,5 | 3 | 2,020 |
On Testing of Samplers | 7 | neurips | 1 | 4 | 2023-06-16 15:10:27.691000 | https://github.com/meelgroup/barbarik | 11 | On testing of samplers | https://scholar.google.com/scholar?cluster=5212652190142141590&hl=en&as_sdt=0,31 | 5 | 2,020 |
Gaussian Process Bandit Optimization of the Thermodynamic Variational Objective | 4 | neurips | 0 | 0 | 2023-06-16 15:10:27.883000 | https://github.com/ntienvu/tvo_gp_bandit | 1 | Gaussian process bandit optimization of the thermodynamic variational objective | https://scholar.google.com/scholar?cluster=4199760950647121080&hl=en&as_sdt=0,5 | 2 | 2,020 |
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers | 503 | neurips | 1,867 | 362 | 2023-06-16 15:10:28.076000 | https://github.com/microsoft/unilm | 12,770 | Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers | https://scholar.google.com/scholar?cluster=14860866195704248914&hl=en&as_sdt=0,6 | 260 | 2,020 |
Woodbury Transformations for Deep Generative Flows | 12 | neurips | 0 | 0 | 2023-06-16 15:10:28.271000 | https://github.com/yolu1055/WoodburyTransformations | 2 | Woodbury transformations for deep generative flows | https://scholar.google.com/scholar?cluster=5314675607084921976&hl=en&as_sdt=0,5 | 4 | 2,020 |
Graph Contrastive Learning with Augmentations | 864 | neurips | 90 | 27 | 2023-06-16 15:10:28.466000 | https://github.com/Shen-Lab/GraphCL | 434 | Graph contrastive learning with augmentations | https://scholar.google.com/scholar?cluster=9963871328827947371&hl=en&as_sdt=0,33 | 9 | 2,020 |
Gradient Surgery for Multi-Task Learning | 483 | neurips | 37 | 12 | 2023-06-16 15:10:28.658000 | https://github.com/tianheyu927/PCGrad | 255 | Gradient surgery for multi-task learning | https://scholar.google.com/scholar?cluster=15639381935804051305&hl=en&as_sdt=0,5 | 18 | 2,020 |
Bayesian Probabilistic Numerical Integration with Tree-Based Models | 2 | neurips | 1 | 0 | 2023-06-16 15:10:28.850000 | https://github.com/ImperialCollegeLondon/BART-Int | 7 | Bayesian probabilistic numerical integration with tree-based models | https://scholar.google.com/scholar?cluster=17070012166494581814&hl=en&as_sdt=0,5 | 3 | 2,020 |
Graph Meta Learning via Local Subgraphs | 99 | neurips | 28 | 3 | 2023-06-16 15:10:29.042000 | https://github.com/mims-harvard/g-meta | 105 | Graph meta learning via local subgraphs | https://scholar.google.com/scholar?cluster=12205589678815319348&hl=en&as_sdt=0,33 | 6 | 2,020 |
Stochastic Deep Gaussian Processes over Graphs | 12 | neurips | 4 | 2 | 2023-06-16 15:10:29.239000 | https://github.com/naiqili/DGPG | 21 | Stochastic deep gaussian processes over graphs | https://scholar.google.com/scholar?cluster=12355545301307730680&hl=en&as_sdt=0,5 | 1 | 2,020 |
Evaluating Attribution for Graph Neural Networks | 73 | neurips | 16 | 2 | 2023-06-16 15:10:29.431000 | https://github.com/google-research/graph-attribution | 65 | Evaluating attribution for graph neural networks | https://scholar.google.com/scholar?cluster=8947730950192198028&hl=en&as_sdt=0,5 | 7 | 2,020 |
Neuron Shapley: Discovering the Responsible Neurons | 73 | neurips | 4 | 1 | 2023-06-16 15:10:29.623000 | https://github.com/amiratag/neuronshapley | 21 | Neuron shapley: Discovering the responsible neurons | https://scholar.google.com/scholar?cluster=17071194082042236550&hl=en&as_sdt=0,5 | 3 | 2,020 |
Stochastic Normalizing Flows | 93 | neurips | 9 | 1 | 2023-06-16 15:10:29.816000 | https://github.com/noegroup/stochastic_normalizing_flows | 56 | Stochastic normalizing flows | https://scholar.google.com/scholar?cluster=16849056708118710462&hl=en&as_sdt=0,22 | 5 | 2,020 |
Revisiting Parameter Sharing for Automatic Neural Channel Number Search | 26 | neurips | 7 | 0 | 2023-06-16 15:10:30.008000 | https://github.com/haolibai/APS-channel-search | 20 | Revisiting parameter sharing for automatic neural channel number search | https://scholar.google.com/scholar?cluster=13186156999876305193&hl=en&as_sdt=0,1 | 3 | 2,020 |
Differentially-Private Federated Linear Bandits | 78 | neurips | 3 | 1 | 2023-06-16 15:10:30.201000 | https://github.com/abhimanyudubey/private_federated_linear_bandits | 3 | Differentially-private federated linear bandits | https://scholar.google.com/scholar?cluster=10188063075897991616&hl=en&as_sdt=0,5 | 1 | 2,020 |
Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking | 31 | neurips | 10 | 18 | 2023-06-16 15:10:30.394000 | https://github.com/paninski-lab/deepgraphpose | 29 | Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking | https://scholar.google.com/scholar?cluster=3453822722256675361&hl=en&as_sdt=0,44 | 7 | 2,020 |
Sparse Symplectically Integrated Neural Networks | 20 | neurips | 1 | 0 | 2023-06-16 15:10:30.586000 | https://github.com/dandip/ssinn | 8 | Sparse symplectically integrated neural networks | https://scholar.google.com/scholar?cluster=14798517979957496479&hl=en&as_sdt=0,15 | 2 | 2,020 |
Continuous Object Representation Networks: Novel View Synthesis without Target View Supervision | 16 | neurips | 2 | 0 | 2023-06-16 15:10:30.778000 | https://github.com/nicolaihaeni/corn | 14 | Continuous object representation networks: novel view synthesis without target view supervision | https://scholar.google.com/scholar?cluster=765897047698290451&hl=en&as_sdt=0,5 | 2 | 2,020 |
Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence | 41 | neurips | 4 | 2 | 2023-06-16 15:10:30.970000 | https://github.com/thomassutter/mmjsd | 13 | Multimodal generative learning utilizing jensen-shannon-divergence | https://scholar.google.com/scholar?cluster=17836611088871038657&hl=en&as_sdt=0,5 | 1 | 2,020 |
Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers | 134 | neurips | 23 | 0 | 2023-06-16 15:10:31.162000 | https://github.com/tum-pbs/Solver-in-the-Loop | 122 | Solver-in-the-loop: Learning from differentiable physics to interact with iterative pde-solvers | https://scholar.google.com/scholar?cluster=2286766760551989039&hl=en&as_sdt=0,32 | 6 | 2,020 |
Optimal Adaptive Electrode Selection to Maximize Simultaneously Recorded Neuron Yield | 4 | neurips | 1 | 0 | 2023-06-16 15:10:31.356000 | https://github.com/pesaranlab/neuro_cbs | 6 | Optimal adaptive electrode selection to maximize simultaneously recorded neuron yield | https://scholar.google.com/scholar?cluster=3860529084779056549&hl=en&as_sdt=0,10 | 3 | 2,020 |
Neurosymbolic Reinforcement Learning with Formally Verified Exploration | 50 | neurips | 2 | 5 | 2023-06-16 15:10:31.548000 | https://github.com/gavlegoat/safe-learning | 13 | Neurosymbolic reinforcement learning with formally verified exploration | https://scholar.google.com/scholar?cluster=16428305531344128935&hl=en&as_sdt=0,5 | 2 | 2,020 |
On 1/n neural representation and robustness | 19 | neurips | 1 | 0 | 2023-06-16 15:10:31.742000 | https://github.com/josuenassar/power_law | 7 | On 1/n neural representation and robustness | https://scholar.google.com/scholar?cluster=14612770369819484609&hl=en&as_sdt=0,14 | 3 | 2,020 |
Boundary thickness and robustness in learning models | 27 | neurips | 1 | 0 | 2023-06-16 15:10:31.934000 | https://github.com/nsfzyzz/boundary_thickness | 17 | Boundary thickness and robustness in learning models | https://scholar.google.com/scholar?cluster=7743383416741324781&hl=en&as_sdt=0,14 | 2 | 2,020 |
Demixed shared component analysis of neural population data from multiple brain areas | 0 | neurips | 1 | 1 | 2023-06-16 15:10:32.126000 | https://github.com/yu-takagi/dSCA | 10 | Demixed shared component analysis of neural population data from multiple brain areas | https://scholar.google.com/scholar?cluster=14678847289626964830&hl=en&as_sdt=0,34 | 3 | 2,020 |
Learning Kernel Tests Without Data Splitting | 14 | neurips | 2 | 0 | 2023-06-16 15:10:32.320000 | https://github.com/MPI-IS/tests-wo-splitting | 5 | Learning kernel tests without data splitting | https://scholar.google.com/scholar?cluster=12039043020526096218&hl=en&as_sdt=0,14 | 3 | 2,020 |
Unsupervised Data Augmentation for Consistency Training | 1,590 | neurips | 313 | 71 | 2023-06-16 15:10:32.512000 | https://github.com/google-research/uda | 2,122 | Unsupervised data augmentation for consistency training | https://scholar.google.com/scholar?cluster=12880251999793471515&hl=en&as_sdt=0,22 | 44 | 2,020 |
Pruning neural networks without any data by iteratively conserving synaptic flow | 337 | neurips | 42 | 4 | 2023-06-16 15:10:32.704000 | https://github.com/ganguli-lab/Synaptic-Flow | 190 | Pruning neural networks without any data by iteratively conserving synaptic flow | https://scholar.google.com/scholar?cluster=1210718401723821316&hl=en&as_sdt=0,39 | 27 | 2,020 |
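The table above can also be queried programmatically. Below is a minimal pandas sketch, assuming the rows have been exported to a local CSV file (hypothetical name `neurips_2020_repos.csv`) with the column names shown in the header; the file name and the queries are illustrative only and are not part of the original dataset tooling.

```python
# Minimal sketch: load the exported table and run a couple of simple queries.
# Assumption: the rows above were saved as "neurips_2020_repos.csv" with the
# columns title, citations_google_scholar, conference, forks, issues,
# lastModified, repo_url, stars, title_google_scholar, url_google_scholar,
# watchers, year.
import pandas as pd

df = pd.read_csv("neurips_2020_repos.csv")

# Top 5 repositories by GitHub stars.
top_starred = df.sort_values("stars", ascending=False).head(5)
print(top_starred[["title", "repo_url", "stars"]])

# Papers whose Google Scholar citation count exceeds their star count.
more_cited = df[df["citations_google_scholar"] > df["stars"]]
print(len(more_cited), "papers are cited more often than they are starred")
```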